1. Introduction
This section is non-normative.
Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to (post-2014) native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.
WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources, such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.
GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline, which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware. Most of the state of a pipeline is defined by a GPURenderPipeline or a GPUComputePipeline object. The state not included in these pipeline objects is set during encoding with commands, such as beginRenderPass() or setBlendConstant().
2. Malicious use considerations
This section is non-normative. It describes the risks associated with exposing this API on the Web.
2.1. Security Considerations
The security requirements for WebGPU are the same as ever for the web, and are likewise non-negotiable. The general approach is to strictly validate all commands before they reach the GPU, ensuring that a page can only work with its own data.
2.1.1. CPU-based undefined behavior
A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.
In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input. An implementation has to validate all the input from the user and only reach the driver with valid workloads. This document specifies all the error conditions and handling semantics. For example, specifying the same buffer with intersecting ranges in both "source" and "destination" of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.
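As a rough illustration of this validation step (not part of the spec, and not how any particular implementation is written), the intersecting-range check for a same-buffer copy boils down to a half-open interval overlap test. The function name here is hypothetical:

```javascript
// Illustrative sketch: the kind of range-overlap check an implementation
// could apply when validating a copyBufferToBuffer() call where the same
// buffer is both "source" and "destination".
function rangesOverlap(srcOffset, dstOffset, size) {
  // Two half-open ranges [srcOffset, srcOffset + size) and
  // [dstOffset, dstOffset + size) intersect unless one ends
  // before the other begins.
  return srcOffset < dstOffset + size && dstOffset < srcOffset + size;
}
```

If this predicate is true for a same-buffer copy, the implementation would generate a validation error instead of issuing the copy.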
See the error handling sections of this specification for more information.
2.1.2. GPU-based undefined behavior
WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs, some of the shader instructions may result in undefined behavior on the GPU. In order to address that, the shader instruction set and its defined behaviors are strictly defined by WebGPU. When a shader is provided to createShaderModule(), the WebGPU implementation has to validate it before doing any translation (to platform-specific shaders) or transformation passes.
2.1.3. Uninitialized data
Generally, allocating new memory may expose the leftover data of other applications running on the system. In order to address that, WebGPU conceptually initializes all the resources to zero, although in practice an implementation may skip this step if it sees the developer initializing the contents manually. This includes variables and shared workgroup memory inside shaders.

The precise mechanism of clearing the workgroup memory can differ between platforms. If the native API does not provide facilities to clear it, the WebGPU implementation transforms the compute shader to first do a clear across all invocations, synchronize them, and then continue executing the developer’s code.
NOTE:
The initialization status of a resource used in a queue operation can only be known when the operation is enqueued (not when it is encoded into a command buffer, for example). Therefore, some implementations will require an unoptimized late-clear at enqueue time (e.g. clearing a texture, rather than changing GPULoadOp "load" to "clear").
As a result, all implementations should issue a developer console warning about this potential performance penalty, even if there is no penalty in that implementation.
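The late-clear bookkeeping described in the note can be pictured as a per-resource initialization flag that is checked only at enqueue time. This is an illustrative sketch, not spec text; the class and method names are hypothetical:

```javascript
// Illustrative sketch: tracking resource initialization status so a
// late clear-to-zero can be issued at enqueue time.
class InitTracker {
  constructor() {
    this.initialized = new Set();
  }
  // Called when a queue operation that reads `resource` is enqueued.
  // Returns true if a (potentially unoptimized) late clear had to be issued.
  ensureInitialized(resource) {
    if (this.initialized.has(resource)) return false;
    // Conceptually: issue a clear-to-zero before the enqueued operation runs.
    this.initialized.add(resource);
    return true;
  }
}
```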
2.1.4. Out-of-bounds access in shaders
Shaders can access physical resources either directly (for example, as a "uniform" GPUBufferBinding), or via texture units, which are fixed-function hardware blocks that handle texture coordinate conversions. Validation in the WebGPU API can only guarantee that all the inputs to the shader are provided and that they have the correct usage and types. The WebGPU API cannot guarantee that the data is accessed within bounds if the texture units are not involved.
In order to prevent the shaders from accessing GPU memory an application doesn’t own,the WebGPU implementation may enable a special mode (called "robust buffer access") in the driverthat guarantees that the access is limited to buffer bounds.
Alternatively, an implementation may transform the shader code by inserting manual bounds checks. When this path is taken, the out-of-bounds checks only apply to array indexing. They aren’t needed for plain field access of shader structures, due to the minBindingSize validation on the host side.
If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:
- return a value at a different location within the resource bounds
- return a value vector of "(0, 0, 0, X)" with any "X"
- partially discard the draw or dispatch call

If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:
- write the value to a different location within the resource bounds
- discard the write operation
- partially discard the draw or dispatch call
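One of the allowed load behaviors above (returning a value from a different location within bounds) is commonly achieved by clamping the index. This is an illustrative sketch in plain JavaScript, not an implementation of any real shader transform:

```javascript
// Illustrative sketch: an out-of-bounds load redirected to a different
// location within the resource bounds, via index clamping.
function robustLoad(buffer, index) {
  // Clamp the index into [0, buffer.length - 1] so the access can never
  // escape the resource's bounds.
  const clamped = Math.min(Math.max(index, 0), buffer.length - 1);
  return buffer[clamped];
}
```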
2.1.5. Invalid data
When uploading floating-point data from the CPU to the GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers will only affect the results of arithmetic computations and will not have other side effects.
2.1.6. Driver bugs
GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) into their driver testing process, as was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and to disable WebGPU on drivers with known bugs that can’t be worked around.
2.1.7. Timing attacks
2.1.7.1. Content-timeline timing
WebGPU is designed to later support multi-threaded use via Web Workers. As such, it is designed not to expose users to modern high-precision timing attacks. Some of the objects, like GPUBuffer or GPUQueue, have shared state which can be simultaneously accessed. This allows race conditions to occur, similar to those of accessing a SharedArrayBuffer from multiple Web Workers, which makes the thread scheduling observable.

WebGPU addresses this by limiting the ability to deserialize (or share) objects only to the agents inside the agent cluster, and only if the cross-origin isolated policies are in place. This restriction matches the mitigations against malicious SharedArrayBuffer use. Similarly, the user agent may also serialize the agents sharing any handles to prevent any concurrency entirely.

In the end, the attack surface for races on shared state in WebGPU will be a small subset of the SharedArrayBuffer attacks.
2.1.7.2. Device/queue-timeline timing
Writable storage buffers and other cross-invocation communication may be usable to construct high-precision timers on the queue timeline.
The optional "timestamp-query" feature also provides high-precision timing of GPU operations. To mitigate security and privacy concerns, the timing query values are aligned to a lower precision: see current queue timestamp. Note in particular:
- The device timeline typically runs in a process that is shared by multiple origins, so cross-origin isolation (provided by COOP/COEP) does not provide isolation of device/queue-timeline timers.
- Queue timeline work is issued from the device timeline, and may execute on GPU hardware that does not provide the isolation expected of CPU processes (such as Meltdown mitigations).
- GPU hardware is not typically susceptible to Spectre-style attacks, but WebGPU may be implemented in software, and software implementations may run in a shared process, preventing isolation-based mitigations.
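Aligning timestamps to a lower precision amounts to flooring each value to a coarser granularity. The sketch below is illustrative only; the 100 µs granularity is an arbitrary, hypothetical choice, not a value taken from the spec:

```javascript
// Illustrative sketch: coarsening a nanosecond timestamp-query value to a
// fixed granularity to reduce timer precision.
function coarsenTimestamp(ns, granularityNs = 100_000) {
  // Floor to the nearest multiple of the granularity.
  return Math.floor(ns / granularityNs) * granularityNs;
}
```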
2.1.8. Row hammer attacks
Row hammer is a class of attacks that exploit the leaking of states in DRAM cells. It could be used on a GPU. WebGPU does not have any specific mitigations in place, and relies on platform-level solutions, such as reduced memory refresh intervals.
2.1.9. Denial of service
WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that makes sure an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.
2.1.10. Workload identification
WebGPU provides access to constrained global resources shared between different programs (and web pages) running on the same machine. An application can try to indirectly probe how constrained these global resources are, in order to reason about workloads performed by other open web pages, based on the patterns of usage of these shared resources. These issues are generally analogous to issues with JavaScript, such as system memory and CPU execution throughput. WebGPU does not provide any additional mitigations for this.
2.1.11. Memory resources
WebGPU exposes fallible allocations from machine-global memory heaps, such as VRAM.This allows for probing the size of the system’s remaining available memory(for a given heap type) by attempting to allocate and watching for allocation failures.
GPUs internally have one or more (typically only two) heaps of memoryshared by all running applications. When a heap is depleted, WebGPU would fail to create a resource.This is observable, which may allow a malicious application to guess what heapsare used by other applications, and how much they allocate from them.
2.1.12. Computation resources
If one site uses WebGPU at the same time as another, it may observe the increasein time it takes to process some work. For example, if a site constantly submitscompute workloads and tracks completion of work on the queue,it may observe that something else also started using the GPU.
A GPU has many parts that can be tested independently, such as the arithmetic units, texture sampling units, atomic units, etc. A malicious application may sense when some of these units are stressed, and attempt to guess the workload of another application by analyzing the stress patterns. This is analogous to the realities of CPU execution of JavaScript.
2.1.13. Abuse of capabilities
Malicious sites could abuse the capabilities exposed by WebGPU to runcomputations that don’t benefit the user or their experience and instead onlybenefit the site. Examples would be hidden crypto-mining, password crackingor rainbow tables computations.
It is not possible to guard against these types of uses of the API because thebrowser is not able to distinguish between valid workloads and abusiveworkloads. This is a general problem with all general-purpose computationcapabilities on the Web: JavaScript, WebAssembly or WebGL. WebGPU only makessome workloads easier to implement, or slightly more efficient to run thanusing WebGL.
To mitigate this form of abuse, browsers can throttle operations on background tabs, warn that a tab is using a lot of resources, and restrict which contexts are allowed to use WebGPU.
User agents can heuristically issue warnings to users about high power use,especially due to potentially malicious usage.If a user agent implements such a warning, it should include WebGPU usage inits heuristics, in addition to JavaScript, WebAssembly, WebGL, and so on.
2.2. Privacy Considerations
The privacy considerations for WebGPU are similar to those of WebGL. GPU APIs are complex and mustexpose various aspects of a device’s capabilities out of necessity in order to enable developers totake advantage of those capabilities effectively. The general mitigation approach involvesnormalizing or binning potentially identifying information and enforcing uniform behavior wherepossible.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
2.2.1. Machine-specific features and limits
WebGPU can expose a lot of detail on the underlying GPU architecture and the device geometry.This includes available physical adapters, many limits on the GPU and CPU resourcesthat could be used (such as the maximum texture size), and any optional hardware-specificcapabilities that are available.
User agents are not obligated to expose the real hardware limits; they are in full control of how much of the machine specifics is exposed. One strategy to reduce fingerprinting is binning all the target platforms into a small number of bins. In general, the privacy impact of exposing the hardware limits matches that of WebGL.
The default limits are also deliberately high enoughto allow most applications to work without requesting higher limits.All the usage of the API is validated according to the requested limits,so the actual hardware capabilities are not exposed to the users by accident.
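The binning strategy mentioned above can be pictured as rounding a hardware limit down to a coarse bucket, so many distinct devices report the same value. This is an illustrative sketch with a hypothetical baseline of 2048; it is not taken from the spec or from any implementation:

```javascript
// Illustrative sketch: binning a hardware limit (e.g. maximum texture size)
// to the largest power-of-two bucket at or below the actual value.
function binLimit(actual, baseline = 2048) {
  if (actual < baseline) return 0; // device can't meet the assumed baseline
  let binned = baseline;
  while (binned * 2 <= actual) binned *= 2;
  return binned; // never reports more than the hardware supports
}
```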
2.2.2. Machine-specific artifacts
There are some machine-specific rasterization/precision artifacts and performance differencesthat can be observed roughly in the same way as in WebGL. This applies to rasterization coverageand patterns, interpolation precision of the varyings between shader stages, compute unit scheduling,and more aspects of execution.
Generally, rasterization and precision fingerprints are identical across most or allof the devices of each vendor. Performance differences are relatively intractable,but also relatively low-signal (as with JS execution performance).
Privacy-critical applications and user agents should utilize software implementations to eliminatesuch artifacts.
2.2.3. Machine-specific performance
Another factor for differentiating users is measuring the performance of specific operations on the GPU. Even with low precision timing, repeated execution of an operation can show if the user’s machine is fast at specific workloads. This is a fairly common vector (present in both WebGL and JavaScript), but it’s also low-signal and relatively intractable to truly normalize.
WebGPU compute pipelines expose access to the GPU unobstructed by fixed-function hardware. This poses an additional risk for unique device fingerprinting. User agents can take steps to dissociate logical GPU invocations from actual compute units to reduce this risk.
2.2.4. User Agent State
This specification doesn’t define any additional user-agent state for an origin. However, it is expected that user agents will have compilation caches for the results of expensive compilation, like GPUShaderModule, GPURenderPipeline and GPUComputePipeline. These caches are important to improve the loading time of WebGPU applications after the first visit.
For the specification, these caches are indistinguishable from incredibly fast compilation, but for applications it would be easy to measure how long createComputePipelineAsync() takes to resolve. This can leak information across origins (like "did the user access a site with this specific shader"), so user agents should follow the best practices in storage partitioning.
The system’s GPU driver may also have its own cache of compiled shaders and pipelines. User agentsmay want to disable these when at all possible, or add per-partition data to shaders in ways thatwill make the GPU driver consider them different.
2.2.5. Driver bugs
In addition to the concerns outlined in Security Considerations, driverbugs may introduce differences in behavior that can be observed as a method of differentiatingusers. The mitigations mentioned in Security Considerations apply here as well, includingcoordinating with GPU vendors and implementing workarounds for known issues in the user agent.
2.2.6. Adapter Identifiers
Past experience with WebGL has demonstrated that developers have a legitimate need to be able toidentify the GPU their code is running on in order to create and maintain robust GPU-based content.For example, to identify adapters with known driver bugs in order to work around them or to avoidfeatures that perform more poorly than expected on a given class of hardware.
But exposing adapter identifiers also naturally expands the amount of fingerprinting informationavailable, so there’s a desire to limit the precision with which we identify the adapter.
There are several mitigations that can be applied to strike a balance between enabling robustcontent and preserving privacy. First is that user agents can reduce the burden on developers byidentifying and working around known driver issues, as they have since browsers began making use ofGPUs.
When adapter identifiers are exposed by default, they should be as broad as possible while still being useful: for example, identifying the adapter’s vendor and general architecture without identifying the specific adapter in use. Similarly, in some cases identifiers for an adapter that is considered a reasonable proxy for the actual adapter may be reported.
In cases where full and detailed information about the adapter is useful (for example: when filingbug reports) the user can be asked for consent to reveal additional information about their hardwareto the page.
Finally, the user agent will always have the discretion to not report adapter identifiers at all ifit considers it appropriate, such as in enhanced privacy modes.
3. Fundamentals
3.1. Conventions
3.1.1. Syntactic Shorthands
In this specification, the following syntactic shorthands are used:
- The "." ("dot") syntax, common in programming languages.
  The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo." If Foo is an ordered map, asserts that the key Bar exists. Editorial note: Some phrasing in this spec may currently assume this resolves to undefined if Bar doesn’t exist. The phrasing "Foo.Bar is provided" means "the Bar member exists in the map value Foo".
- The "?." ("optional chaining") syntax, adopted from JavaScript.
  The phrasing "Foo?.Bar" means "if Foo is null or undefined or Bar does not exist in Foo, undefined; otherwise, Foo.Bar". For example, where buffer is a GPUBuffer, buffer?.[[device]].[[adapter]] means "if buffer is null or undefined, then undefined; otherwise, the [[adapter]] internal slot of the [[device]] internal slot of buffer".
- The "??" ("nullish coalescing") syntax, adopted from JavaScript.
  The phrasing "x ?? y" means "x, if x is not null/undefined, and y otherwise".
- slot-backed attribute
  A WebIDL attribute which is backed by an internal slot of the same name. It may or may not be mutable.
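The "?." and "??" shorthands behave like the JavaScript operators they are borrowed from; a quick illustration of those operators themselves:

```javascript
// The JavaScript optional chaining and nullish coalescing operators that
// the spec shorthands are modeled on.
const missing = null;
const present = { device: { adapter: "adapterA" } };

// Optional chaining: short-circuits to undefined on null/undefined.
const a = missing?.device;          // undefined
const b = present?.device.adapter;  // "adapterA"

// Nullish coalescing: picks the right operand only for null/undefined.
const c = undefined ?? "fallback";  // "fallback"
const d = 0 ?? "fallback";          // 0 (0 is neither null nor undefined)
```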
3.1.2. WebGPU Interfaces
A WebGPU interface defines a WebGPU object. It can be used:
- On the content timeline where it was created, where it is a JavaScript-exposed WebIDL interface.
- On all other timelines, where only immutable properties can be accessed.

The following special property types can be defined on WebGPU interfaces:
- immutable property
  A read-only slot set during initialization of the object. It can be accessed from any timeline.
  Note: Since the slot is immutable, implementations may have a copy on multiple timelines, as needed. Immutable properties are defined in this way to avoid describing multiple copies in this spec.
  If named [[with brackets]], it is an internal slot. If named withoutBrackets, it is a readonly slot-backed attribute.
- content timeline property
  A property which is only accessible from the content timeline where the object was created.
  If named [[with brackets]], it is an internal slot. If named withoutBrackets, it is a slot-backed attribute.
Any interface which includes GPUObjectBase
is a WebGPU interface.
interface mixin GPUObjectBase {
    attribute USVString label;
};
To create a new WebGPU object(GPUObjectBase parent, interface T, GPUObjectDescriptorBase descriptor) (where T extends GPUObjectBase):
- Let device be parent.[[device]].
- Let object be a new instance of T.
- Let internals be a new (uninitialized) instance of the type of T.[[internals]] (which may override GPUObjectBase.[[internals]]) that is accessible only from the device timeline of device.
- Set object.[[device]] to device.
- Set object.[[internals]] to internals.
- Set object.label to descriptor.label.
- Return [object, internals].
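The creation steps above can be sketched with plain JavaScript classes. This is an illustrative model only, not an implementation; the class names (Internals, GPUBufferLike) and the create helper are hypothetical stand-ins for the spec’s types and algorithm:

```javascript
// Illustrative sketch of "create a new WebGPU object".
class Internals {
  constructor(device) {
    this.device = device; // device-timeline-only state, modeled as a field
  }
}

class WebGPUObject {
  static create(parent, T, descriptor) {
    const device = parent.device;            // 1. Let device be parent.[[device]].
    const object = new T();                  // 2. Let object be a new instance of T.
    const internals = new Internals(device); // 3. Let internals be a new instance.
    object.device = device;                  // 4. Set object.[[device]] to device.
    object.internals = internals;            // 5. Set object.[[internals]] to internals.
    object.label = descriptor.label ?? "";   // 6. Set object.label to descriptor.label.
    return [object, internals];              // 7. Return [object, internals].
  }
}

class GPUBufferLike extends WebGPUObject {}
```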
GPUObjectBase has the following immutable properties:
- [[internals]], of type internal object, readonly, overridable
  The internal object. Operations on the contents of this object assert they are running on the device timeline, and that the device is valid. For each interface that subtypes GPUObjectBase, this may be overridden with a subtype of internal object. This slot is initially set to an uninitialized object of that type.
- [[device]], of type device, readonly
  The device that owns the internal object. Operations on the contents of this object assert they are running on the device timeline, and that the device is valid.
GPUObjectBase has the following content timeline properties:
- label, of type USVString
  A developer-provided label which is used in an implementation-defined way. It can be used by the browser, OS, or other tools to help identify the underlying internal object to the developer. Examples include displaying the label in GPUError messages, console warnings, browser developer tools, and platform debugging utilities.

NOTE:
Implementations should use labels to enhance error messages by using them to identify WebGPU objects. However, this need not be the only way of identifying objects: implementations should also use other available information, especially when no label is available. For example:
- The label of the parent GPUTexture when printing a GPUTextureView.
- The label of the parent GPUCommandEncoder when printing a GPURenderPassEncoder or GPUComputePassEncoder.
- The label of the source GPUCommandEncoder when printing a GPUCommandBuffer.
- The label of the source GPURenderBundleEncoder when printing a GPURenderBundle.

NOTE:
The label is a property of the GPUObjectBase. Two GPUObjectBase "wrapper" objects have completely separate label states, even if they refer to the same underlying object (for example returned by getBindGroupLayout()). The label property will not change except by being set from JavaScript. This means one underlying object could be associated with multiple labels. This specification does not define how the label is propagated to the device timeline. How labels are used is completely implementation-defined: error messages could show the most recently set label, all known labels, or no labels at all. It is defined as a USVString because some user agents may supply it to the debug facilities of the underlying native APIs.
NOTE:
Ideally, WebGPU interfaces should not prevent their parent objects, such as the [[device]] that owns them, from being garbage collected. This cannot be guaranteed, however, as holding a strong reference to a parent object may be required in some implementations. As a result, developers should assume that a WebGPU interface may not be garbage collected until all child objects of that interface have also been garbage collected. This may cause some resources to remain allocated longer than anticipated. Calling the destroy method on a WebGPU interface (such as GPUDevice.destroy() or GPUBuffer.destroy()) should be favored over relying on garbage collection if predictable release of allocated resources is needed.
3.1.3. Internal Objects
An internal object tracks state of WebGPU objects that may only be used onthe device timeline, in device timeline slots, which may be mutable.
- device timeline slot
-
An internal slot which is only accessible from the device timeline.
All reads/writes to the mutable state of an internal object occur from steps executing on a single well-ordered device timeline. These steps may have been issued from a content timeline algorithm on any of multiple agents.
Note: An "agent" refers to a JavaScript "thread" (i.e. main thread, or Web Worker).
3.1.4. Object Descriptors
An object descriptor holds the information needed to create an object, which is typically done via one of the create* methods of GPUDevice.
dictionary GPUObjectDescriptorBase {
    USVString label = "";
};
GPUObjectDescriptorBase has the following members:
- label, of type USVString, defaulting to ""
  The initial value of GPUObjectBase.label.
3.2. Asynchrony
3.2.1. Invalid Internal Objects & Contagious Invalidity
Object creation operations in WebGPU don’t return promises, but nonetheless are internally asynchronous. Returned objects refer to internal objects which are manipulated on a device timeline. Rather than fail with exceptions or rejections, most errors that occur on a device timeline are communicated through GPUErrors generated on the associated device.
Internal objects are either valid or invalid.An invalid object will never become valid at a later time,but some valid objects may become invalid.
Objects are invalid from creation if it wasn’t possible to create them. This can happen, for example, if the object descriptor doesn’t describe a valid object, or if there is not enough memory to allocate a resource. It can also happen if an object is created with or from another invalid object (for example, calling createView() on an invalid GPUTexture): this case is referred to as contagious invalidity.
Internal objects of most types cannot become invalid after they are created, but still may become unusable, e.g. if the owning device is lost or destroyed, or the object has a special internal state, like buffer state "destroyed".

Internal objects of some types can become invalid after they are created; specifically, devices, adapters, GPUCommandBuffers, and command/pass/bundle encoders.
A given GPUObjectBase object is valid to use with a targetObject if and only if the following requirements are met:
- object must be valid.
- object.[[device]] must be valid.
- object.[[device]] must equal targetObject.[[device]].
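The requirements above can be read as a simple predicate. The sketch below is illustrative only, with plain `valid` and `device` fields standing in for validity and the [[device]] internal slot:

```javascript
// Illustrative sketch: the "valid to use with" requirements as a predicate.
function validToUseWith(object, targetObject) {
  return object.valid &&                        // object must be valid
         object.device.valid &&                 // object.[[device]] must be valid
         object.device === targetObject.device; // same [[device]] as the target
}
```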
3.2.2. Promise Ordering
Several operations in WebGPU return promises:
- GPU.requestAdapter()
- GPUAdapter.requestDevice()
- GPUAdapter.requestAdapterInfo()
- GPUDevice.createComputePipelineAsync()
- GPUDevice.createRenderPipelineAsync()
- GPUBuffer.mapAsync()
- GPUShaderModule.getCompilationInfo()
- GPUQueue.onSubmittedWorkDone()
- GPUDevice.lost
- GPUDevice.popErrorScope()
WebGPU does not make any guarantees about the order in which these promises settle (resolve or reject), except for the following:
- For some GPUQueue q, if p1 = q.onSubmittedWorkDone() is called before p2 = q.onSubmittedWorkDone(), then p1 must settle before p2.
- For some GPUQueue q and GPUBuffer b on the same GPUDevice, if p1 = b.mapAsync() is called before p2 = q.onSubmittedWorkDone(), then p1 must settle before p2.
Applications must not rely on any other promise settlement ordering.
3.3. Coordinate Systems
Rendering operations use the following coordinate systems:
- Normalized device coordinates (or NDC) have three dimensions, where:
  - -1.0 ≤ x ≤ 1.0
  - -1.0 ≤ y ≤ 1.0
  - 0.0 ≤ z ≤ 1.0
  - The bottom-left corner is at (-1.0, -1.0, z).
- Clip space coordinates have four dimensions: (x, y, z, w)
  - Clip space coordinates are used for the clip position of a vertex (i.e. the position output of a vertex shader), and for the clip volume.
  - Normalized device coordinates and clip space coordinates are related as follows: if point p = (p.x, p.y, p.z, p.w) is in the clip volume, then its normalized device coordinates are (p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w).
- Framebuffer coordinates address the pixels in the framebuffer.
  - They have two dimensions.
  - Each pixel extends 1 unit in the x and y dimensions.
  - The top-left corner is at (0.0, 0.0).
  - x increases to the right.
  - y increases down.
  - See § 17 Render Passes and § 23.3.5 Rasterization.
- Viewport coordinates combine framebuffer coordinates in the x and y dimensions, with depth in z.
  - Normally 0.0 ≤ z ≤ 1.0, but this can be modified by setting [[viewport]].minDepth and maxDepth via setViewport().
- Fragment coordinates match viewport coordinates.
- UV coordinates are used to sample textures, and have two dimensions:
  - 0 ≤ u ≤ 1.0
  - 0 ≤ v ≤ 1.0
  - (0.0, 0.0) is in the first texel in texture memory address order.
  - (1.0, 1.0) is in the last texel in texture memory address order.
- Window coordinates, or present coordinates, match framebuffer coordinates, and are used when interacting with an external display or conceptually similar interface.
Note: WebGPU’s coordinate systems match DirectX’s coordinate systems in a graphics pipeline.
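The clip-space-to-NDC relationship above is just a perspective divide; a quick sketch:

```javascript
// The perspective divide relating clip space coordinates (x, y, z, w)
// to normalized device coordinates (x/w, y/w, z/w).
function clipToNdc([x, y, z, w]) {
  return [x / w, y / w, z / w];
}
```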
3.4. Programming Model
3.4.1. Timelines
WebGPU’s behavior is described in terms of "timelines".Each operation (defined as algorithms) occurs on a timeline.Timelines clearly define both the order of operations, and which state isavailable to which operations.
Note: This "timeline" model describes the constraints of the multi-process models ofbrowser engines (typically with a "content process" and "GPU process"), as wellas the GPU itself as a separate execution unit in many implementations.Implementing WebGPU does not require timelines to execute in parallel, so doesnot require multiple processes, or even multiple threads.
- Content timeline
  Associated with the execution of the Web script. It includes calling all methods described by this specification.
  To issue steps to the content timeline from an operation on GPUDevice device, queue a global task for GPUDevice device with those steps.
- Device timeline
  Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.
- Queue timeline
  Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.
The following show the styling of steps and values associated with each timeline. This styling is non-normative; the specification text always describes the association.
- Immutable value example definition
  Can be used on any timeline.
- Content-timeline example definition
  Can only be used on the content timeline.
- Device-timeline example definition
  Can only be used on the device timeline.
- Queue-timeline example definition
  Can only be used on the queue timeline.
Steps executed on the content timeline look like this.
Immutable value example definition. Content-timeline example definition.
Steps executed on the device timeline look like this.
Immutable value example definition. Device-timeline example definition.
Steps executed on the queue timeline look like this.
Immutable value example definition. Queue-timeline example definition.
In this specification, asynchronous operations are used when the return value depends on work that happens on any timeline other than the Content timeline. They are represented by promises and events in the API.
GPUComputePassEncoder.dispatchWorkgroups():
- User encodes a dispatchWorkgroups command by calling a method of the GPUComputePassEncoder, which happens on the Content timeline.
- User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.
- The submit gets dispatched by the GPU invocation scheduler onto the actual compute units for execution, which happens on the Queue timeline.

GPUDevice.createBuffer():
- User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.
- User agent creates a low-level buffer on the Device timeline.

GPUBuffer.mapAsync():
- User requests to map a GPUBuffer on the Content timeline and gets a promise in return.
- User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.
- After the GPU operating on the Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.
3.4.2. Memory Model
This section is non-normative.
Once a GPUDevice has been obtained during an application initialization routine, we can describe the WebGPU platform as consisting of the following layers:
- User agent implementing the specification.
- Operating system with low-level native API drivers for this device.
- Actual CPU and GPU hardware.
Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:
- The script-owned memory, such as an ArrayBuffer created by the script, is generally not accessible by a GPU driver.
- A user agent may have different processes responsible for running the content and communication to the GPU driver. In this case, it uses inter-process shared memory to transfer data.
- Dedicated GPUs have their own memory with high bandwidth, while integrated GPUs typically share memory with the system.
Most physical resources are allocated in the memory of type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.
All of these transitions are done by the WebGPU implementation of the user agent.
Note: This example describes the worst case, while in practice the implementation may not need to cross the process boundary, or may be able to expose the driver-managed memory directly to the user behind an ArrayBuffer, thus avoiding any data copies.
3.4.3. Resource Usages
A physical resource can be used on the GPU with an internal usage:
- input: Buffer with input data for draw or dispatch calls. Preserves the contents. Allowed by buffer INDEX, buffer VERTEX, or buffer INDIRECT.
- constant: Resource bindings that are constant from the shader point of view. Preserves the contents. Allowed by buffer UNIFORM or texture TEXTURE_BINDING.
- storage: Writable storage resource binding. Allowed by buffer STORAGE or texture STORAGE_BINDING.
- storage-read: Read-only storage resource bindings. Preserves the contents. Allowed by buffer STORAGE or texture STORAGE_BINDING.
- attachment: Texture used as an output attachment in a render pass. Allowed by texture RENDER_ATTACHMENT.
- attachment-read: Texture used as a read-only attachment in a render pass. Preserves the contents. Allowed by texture RENDER_ATTACHMENT.
We define subresource to be either a whole buffer, or a texture subresource.
Some internal usages are compatible with others. A subresource can be in a state that combines multiple usages together. We consider a list U to be a compatible usage list if (and only if) it satisfies any of the following rules:
- Each usage in U is input, constant, storage-read, or attachment-read.
- Each usage in U is storage.
- U contains exactly one element: attachment.
Enforcing that the usages are only combined into a compatible usage list allows the API to limit when data races can occur in working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.
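The three rules above can be expressed as a small predicate. The following is a non-normative sketch; the function name and the representation of internal usages as strings are illustrative assumptions, not part of the specification.

```javascript
// Sketch (non-normative): checks whether a list of internal usages
// forms a "compatible usage list" per the three rules above.
const READ_ONLY_USAGES = new Set(["input", "constant", "storage-read", "attachment-read"]);

function isCompatibleUsageList(usages) {
  // Rule 1: every usage is read-only (input, constant, storage-read, attachment-read).
  if (usages.every((u) => READ_ONLY_USAGES.has(u))) return true;
  // Rule 2: every usage is writable storage.
  if (usages.every((u) => u === "storage")) return true;
  // Rule 3: exactly one element, and it is attachment.
  if (usages.length === 1 && usages[0] === "attachment") return true;
  return false;
}
```

For instance, `isCompatibleUsageList(["input", "storage"])` is false, which corresponds to the error described below for binding one buffer as both vertex input and writable storage within one render pass.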
Generally, when an implementation processes an operation that uses a subresource in a different way than its current usage allows, it schedules a transition of the resource into the new state. In some cases, like within an open GPURenderPassEncoder, such a transition is impossible due to the hardware limitations. We define these places as usage scopes.
The main usage rule is, for any one subresource, its list of internal usages within one usage scope must be a compatible usage list.
For example, binding the same buffer for storage as well as for input within the same GPURenderPassEncoder would put the encoder as well as the owning GPUCommandEncoder into the error state. This combination of usages does not make a compatible usage list.
Note: a race condition between multiple writable storage buffer/texture usages in a single usage scope is allowed.
The subresources of textures included in the views provided to GPURenderPassColorAttachment.view and GPURenderPassColorAttachment.resolveTarget are considered to be used as attachment for the usage scope of this render pass.
3.4.4. Synchronization
For each subresource of a physical resource, its set of internal usage flags is tracked on the Queue timeline.
On the Queue timeline, there is an ordered sequence of usage scopes. For the duration of each scope, the set of internal usage flags of any given subresource is constant. A subresource may transition to new usages at the boundaries between usage scopes.
This specification defines the following usage scopes:
- Outside of a pass (in GPUCommandEncoder), each (non-state-setting) command is one usage scope (e.g. copyBufferToTexture()).
- In a compute pass, each dispatch command (dispatchWorkgroups() or dispatchWorkgroupsIndirect()) is one usage scope. A subresource is "used" in the usage scope if it is potentially accessible by the command. Within a dispatch, for each bind group slot that is used by the current GPUComputePipeline's [[layout]], every subresource referenced by that bind group is "used" in the usage scope. State-setting compute pass commands, like setBindGroup(), do not contribute directly to a usage scope; they instead change the state that is checked in dispatch commands.
- One render pass is one usage scope. A subresource is "used" in the usage scope if it’s referenced by any (state-setting or non-state-setting) command. For example, in setBindGroup(), every subresource in bindGroup is "used" in the render pass’s usage scope.
The above should probably talk about GPU commands. But we don’t have a way to reference specific GPU commands (like dispatch) yet.
NOTE:
The above rules mean the following example resource usages are included in usage scope validation:
- In a render pass, subresources used in any setBindGroup() call, regardless of whether the currently bound pipeline’s shader or layout actually depends on these bindings, or the bind group is shadowed by another 'set' call.
- A buffer used in any setVertexBuffer() call, regardless of whether any draw call depends on this buffer, or this buffer is shadowed by another 'set' call.
- A buffer used in any setIndexBuffer() call, regardless of whether any draw call depends on this buffer, or this buffer is shadowed by another 'set' call.
- A texture subresource used as a color attachment, resolve attachment, or depth/stencil attachment in GPURenderPassDescriptor by beginRenderPass(), regardless of whether the shader actually depends on these attachments.
- Resources used in bind group entries with visibility 0, or visible only to the compute stage but used in a render pass (or vice versa).
During command encoding, every usage of a subresource is recorded in one of the usage scopes in the command buffer. For each usage scope, the implementation performs usage scope validation by composing the list of all internal usage flags of each subresource used in the usage scope. If any of those lists is not a compatible usage list, GPUCommandEncoder.finish() will generate a validation error.
3.5. Core Internal Objects
3.5.1. Adapters
An adapter identifies an implementation of WebGPU on the system: both an instance of compute/rendering functionality on the platform underlying a browser, and an instance of a browser’s implementation of WebGPU on top of that functionality.
Adapters do not uniquely represent underlying implementations: calling requestAdapter() multiple times returns a different adapter object each time.
Each adapter object can only be used to create one device: upon a successful requestDevice(), the adapter becomes invalid. Additionally, adapter objects may expire at any time.
Note: This ensures applications use the latest system state for adapter selection when creating a device. It also encourages robustness to more scenarios by making them look similar: first initialization, reinitialization due to an unplugged adapter, reinitialization due to a test GPUDevice.destroy() call, etc.
An adapter may be considered a fallback adapter if it has significant performance caveats in exchange for some combination of wider compatibility, more predictable behavior, or improved privacy. It is not required that a fallback adapter is available on every system.
An adapter has the following internal slots:
- [[features]], of type ordered set<GPUFeatureName>, readonly: The features which can be used to create devices on this adapter.
- [[limits]], of type supported limits, readonly: The best limits which can be used to create devices on this adapter. Each adapter limit must be the same or better than its default value in supported limits.
- [[fallback]], of type boolean: If set to true, indicates that the adapter is a fallback adapter.
Adapters are exposed via GPUAdapter.
3.5.2. Devices
A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across multiple agents (e.g. dedicated workers).
A device is the exclusive owner of all internal objects created from it: when the device becomes invalid (is lost or destroyed), it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become implicitly unusable.
A device has the following internal slots:
- [[adapter]], of type adapter, readonly: The adapter from which this device was created.
- [[features]], of type ordered set<GPUFeatureName>, readonly: The features which can be used on this device. No additional features can be used, even if the underlying adapter can support them.
- [[limits]], of type supported limits, readonly: The limits which can be used on this device. No better limits can be used, even if the underlying adapter can support them.
When a new device device is created from adapter adapter with GPUDeviceDescriptor descriptor:
- Set device.[[adapter]] to adapter.
- Set device.[[features]] to the set of values in descriptor.requiredFeatures.
- Let device.[[limits]] be a supported limits object with the default values. For each (key, value) pair in descriptor.requiredLimits, set the member corresponding to key in device.[[limits]] to the better value of value or the default value in supported limits.
Any time the user agent needs to revoke access to a device, it calls lose the device(device, "unknown") on the device’s device timeline, potentially ahead of other operations currently queued on that timeline.
If an operation fails with side effects that would observably change the state of objects on the device or potentially corrupt internal implementation/driver state, the device should be lost to prevent these changes from being observable.
Note: For all device losses not initiated by the application (via destroy()), user agents should consider issuing developer-visible warnings unconditionally, even if the lost promise is handled. These scenarios should be rare, and the signal is vital to developers because most of the WebGPU API tries to behave like nothing is wrong to avoid interrupting the runtime flow of the application: no validation errors are raised, most promises resolve normally, etc.
To lose the device(device, reason):
- Make device invalid.
- Let gpuDevice be the content timeline GPUDevice corresponding to device. Define this more rigorously.
- Issue the following steps on the content timeline of gpuDevice:
  - Resolve device.lost with a new GPUDeviceLostInfo with reason set to reason and message set to an implementation-defined value. Note: message should not disclose unnecessary user/system information and should never be parsed by applications.
  - Complete any outstanding mapAsync() steps.
  - Complete any outstanding onSubmittedWorkDone() steps.
Note: No errors are generated after device loss. See .
Devices are exposed via GPUDevice.
3.6. Optional Capabilities
WebGPU adapters and devices have capabilities, which describe WebGPU functionality that differs between different implementations, typically due to hardware or system software constraints. A capability is either a feature or a limit.
A user agent must not reveal more than 32 distinguishable configurations or buckets.
The capabilities of an adapter must conform to § 4.2.1 Adapter Capability Guarantees.
Only supported capabilities may be requested in requestDevice(); requesting unsupported capabilities results in failure.
The capabilities of a device are exactly the ones which were requested in requestDevice(). These capabilities are enforced regardless of the capabilities of the adapter.
For privacy considerations, see § 2.2.1 Machine-specific features and limits.
3.6.1. Features
A feature is a set of optional WebGPU functionality that is not supported on all implementations, typically due to hardware or system software constraints.
Functionality that is part of a feature may only be used if the feature was requested at device creation (in requiredFeatures). Otherwise, using existing API surfaces in a new way typically results in a validation error, and using optional API surfaces results in the following:
- Using a new method or enum value always throws a TypeError.
- Using a new dictionary member with a (correctly-typed) non-default value typically results in a validation error.
- Using a new WGSL enable directive always results in a createShaderModule() validation error.
A GPUFeatureName feature is enabled for a GPUObjectBase object if and only if object.[[device]].[[features]] contains feature.
See the Feature Index for a description of the functionality each feature enables.
3.6.2. Limits
Each limit is a numeric limit on the usage of WebGPU on a device.
Each limit has a default value. Every adapter is guaranteed to support the default value or better. The default is used if a value is not explicitly specified in requiredLimits.
One limit value may be better than another. A better limit value always relaxes validation, enabling strictly more programs to be valid. For each limit class, "better" is defined.
Different limits have different limit classes:
- maximum: The limit enforces a maximum on some value passed into the API. Higher values are better. May only be set to values ≥ the default. Lower values are clamped to the default.
- alignment: The limit enforces a minimum alignment on some value passed into the API; that is, the value must be a multiple of the limit. Lower values are better. May only be set to powers of 2 which are ≤ the default. Values which are not powers of 2 are invalid. Higher powers of 2 are clamped to the default.
Note: Setting "better" limits may not necessarily be desirable, as they may have a performance impact.Because of this, and to improve portability across devices and implementations,applications should generally request the "worst" limits that work for their content(ideally, the default values).
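The two limit classes can be summarized in a non-normative sketch: a "better" comparison plus the clamping and validity rules above. The function names are illustrative assumptions, not spec algorithm names.

```javascript
// Sketch (non-normative): "better" comparison and request handling for the
// two limit classes described above.
function isBetter(limitClass, a, b) {
  // Higher is better for "maximum" limits; lower is better for "alignment".
  return limitClass === "maximum" ? a > b : a < b;
}

function isPowerOfTwo(n) {
  return Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
}

// Returns the value that would actually take effect for a requested limit.
// Non-power-of-2 alignment requests are invalid; worse-than-default
// requests are clamped to the default.
function applyRequestedLimit(limitClass, requested, defaultValue) {
  if (limitClass === "alignment" && !isPowerOfTwo(requested)) {
    throw new TypeError("alignment limits must be powers of 2");
  }
  return isBetter(limitClass, requested, defaultValue) ? requested : defaultValue;
}
```

For example, requesting maxTextureDimension2D of 1024 (default 8192) yields 8192, while requesting minUniformBufferOffsetAlignment of 64 (default 256) yields 64.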
A supported limits object has a value for every limit defined by WebGPU:
Limit name | Type | Limit class | Default |
---|---|---|---|
maxTextureDimension1D | GPUSize32 | maximum | 8192 |
The maximum allowed value for the size.width of a texture created with dimension "1d". | |||
maxTextureDimension2D | GPUSize32 | maximum | 8192 |
The maximum allowed value for the size.width and size.height of a texture created with dimension "2d". | |||
maxTextureDimension3D | GPUSize32 | maximum | 2048 |
The maximum allowed value for the size.width, size.height and size.depthOrArrayLayers of a texture created with dimension "3d". | |||
maxTextureArrayLayers | GPUSize32 | maximum | 256 |
The maximum allowed value for the size.depthOrArrayLayers of a texture created with dimension "2d". | |||
maxBindGroups | GPUSize32 | maximum | 4 |
The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout . | |||
maxBindGroupsPlusVertexBuffers | GPUSize32 | maximum | 24 |
The maximum number of bind group and vertex buffer slots used simultaneously, counting any empty slots below the highest index. Validated in createRenderPipeline() and in draw calls. | |||
maxBindingsPerBindGroup | GPUSize32 | maximum | 1000 |
The number of binding indices available when creating a GPUBindGroupLayout . Note: This limit is normative, but arbitrary. With the default binding slot limits, it is impossible to use 1000 bindings in one bind group, but this allows | |||
maxDynamicUniformBuffersPerPipelineLayout | GPUSize32 | maximum | 8 |
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers with dynamic offsets. See Exceeds the binding slot limits. | |||
maxDynamicStorageBuffersPerPipelineLayout | GPUSize32 | maximum | 4 |
The maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers with dynamic offsets. See Exceeds the binding slot limits. | |||
maxSampledTexturesPerShaderStage | GPUSize32 | maximum | 16 |
For each possible GPUShaderStage stage , the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are sampled textures. See Exceeds the binding slot limits. | |||
maxSamplersPerShaderStage | GPUSize32 | maximum | 16 |
For each possible GPUShaderStage stage , the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are samplers. See Exceeds the binding slot limits. | |||
maxStorageBuffersPerShaderStage | GPUSize32 | maximum | 8 |
For each possible GPUShaderStage stage , the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage buffers. See Exceeds the binding slot limits. | |||
maxStorageTexturesPerShaderStage | GPUSize32 | maximum | 4 |
For each possible GPUShaderStage stage , the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are storage textures. See Exceeds the binding slot limits. | |||
maxUniformBuffersPerShaderStage | GPUSize32 | maximum | 12 |
For each possible GPUShaderStage stage , the maximum number of GPUBindGroupLayoutEntry entries across a GPUPipelineLayout which are uniform buffers. See Exceeds the binding slot limits. | |||
maxUniformBufferBindingSize | GPUSize64 | maximum | 65536 bytes |
The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform". | |||
maxStorageBufferBindingSize | GPUSize64 | maximum | 134217728 bytes (128 MiB) |
The maximum GPUBufferBinding.size for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage". | |||
minUniformBufferOffsetAlignment | GPUSize32 | alignment | 256 bytes |
The required alignment for GPUBufferBinding.offset and the dynamic offsets provided in setBindGroup(), for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "uniform". | |||
minStorageBufferOffsetAlignment | GPUSize32 | alignment | 256 bytes |
The required alignment for GPUBufferBinding.offset and the dynamic offsets provided in setBindGroup(), for bindings with a GPUBindGroupLayoutEntry entry for which entry.buffer?.type is "storage" or "read-only-storage". | |||
maxVertexBuffers | GPUSize32 | maximum | 8 |
The maximum number of buffers when creating a GPURenderPipeline . | |||
maxBufferSize | GPUSize64 | maximum | 268435456 bytes (256 MiB) |
The maximum size of size when creating a GPUBuffer . | |||
maxVertexAttributes | GPUSize32 | maximum | 16 |
The maximum number of attributes in total across buffers when creating a GPURenderPipeline . | |||
maxVertexBufferArrayStride | GPUSize32 | maximum | 2048 bytes |
The maximum allowed arrayStride when creating a GPURenderPipeline . | |||
maxInterStageShaderComponents | GPUSize32 | maximum | 60 |
The maximum allowed number of components of input or output variables for inter-stage communication (like vertex outputs or fragment inputs). | |||
maxInterStageShaderVariables | GPUSize32 | maximum | 16 |
The maximum allowed number of input or output variables for inter-stage communication (like vertex outputs or fragment inputs). | |||
maxColorAttachments | GPUSize32 | maximum | 8 |
The maximum allowed number of color attachments in GPURenderPipelineDescriptor.fragment.targets, GPURenderPassDescriptor.colorAttachments, and GPURenderPassLayout.colorFormats. | |||
maxColorAttachmentBytesPerSample | GPUSize32 | maximum | 32 bytes |
The maximum number of bytes necessary to hold one sample (pixel or subpixel) of render pipeline output data, across all color attachments. | |||
maxComputeWorkgroupStorageSize | GPUSize32 | maximum | 16384 bytes |
The maximum number of bytes of workgroup storage used for a compute stage GPUShaderModule entry-point. | |||
maxComputeInvocationsPerWorkgroup | GPUSize32 | maximum | 256 |
The maximum value of the product of the workgroup_size dimensions for a compute stage GPUShaderModule entry-point. | |||
maxComputeWorkgroupSizeX | GPUSize32 | maximum | 256 |
The maximum value of the workgroup_size X dimension for a compute stage GPUShaderModule entry-point. | |||
maxComputeWorkgroupSizeY | GPUSize32 | maximum | 256 |
The maximum value of the workgroup_size Y dimension for a compute stage GPUShaderModule entry-point. | |||
maxComputeWorkgroupSizeZ | GPUSize32 | maximum | 64 |
The maximum value of the workgroup_size Z dimension for a compute stage GPUShaderModule entry-point. | |||
maxComputeWorkgroupsPerDimension | GPUSize32 | maximum | 65535 |
The maximum value for the arguments of dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ) . |
3.6.2.1. GPUSupportedLimits
GPUSupportedLimits exposes the limits supported by an adapter or device. See GPUAdapter.limits and GPUDevice.limits.
[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedLimits {
    readonly attribute unsigned long maxTextureDimension1D;
    readonly attribute unsigned long maxTextureDimension2D;
    readonly attribute unsigned long maxTextureDimension3D;
    readonly attribute unsigned long maxTextureArrayLayers;
    readonly attribute unsigned long maxBindGroups;
    readonly attribute unsigned long maxBindGroupsPlusVertexBuffers;
    readonly attribute unsigned long maxBindingsPerBindGroup;
    readonly attribute unsigned long maxDynamicUniformBuffersPerPipelineLayout;
    readonly attribute unsigned long maxDynamicStorageBuffersPerPipelineLayout;
    readonly attribute unsigned long maxSampledTexturesPerShaderStage;
    readonly attribute unsigned long maxSamplersPerShaderStage;
    readonly attribute unsigned long maxStorageBuffersPerShaderStage;
    readonly attribute unsigned long maxStorageTexturesPerShaderStage;
    readonly attribute unsigned long maxUniformBuffersPerShaderStage;
    readonly attribute unsigned long long maxUniformBufferBindingSize;
    readonly attribute unsigned long long maxStorageBufferBindingSize;
    readonly attribute unsigned long minUniformBufferOffsetAlignment;
    readonly attribute unsigned long minStorageBufferOffsetAlignment;
    readonly attribute unsigned long maxVertexBuffers;
    readonly attribute unsigned long long maxBufferSize;
    readonly attribute unsigned long maxVertexAttributes;
    readonly attribute unsigned long maxVertexBufferArrayStride;
    readonly attribute unsigned long maxInterStageShaderComponents;
    readonly attribute unsigned long maxInterStageShaderVariables;
    readonly attribute unsigned long maxColorAttachments;
    readonly attribute unsigned long maxColorAttachmentBytesPerSample;
    readonly attribute unsigned long maxComputeWorkgroupStorageSize;
    readonly attribute unsigned long maxComputeInvocationsPerWorkgroup;
    readonly attribute unsigned long maxComputeWorkgroupSizeX;
    readonly attribute unsigned long maxComputeWorkgroupSizeY;
    readonly attribute unsigned long maxComputeWorkgroupSizeZ;
    readonly attribute unsigned long maxComputeWorkgroupsPerDimension;
};
3.6.2.2. GPUSupportedFeatures
GPUSupportedFeatures is a setlike interface. Its set entries are the GPUFeatureName values of the features supported by an adapter or device. It must only contain strings from the GPUFeatureName enum.
[Exposed=(Window, Worker), SecureContext]
interface GPUSupportedFeatures {
    readonly setlike<DOMString>;
};
NOTE: The type of the GPUSupportedFeatures set entries is DOMString to allow user agents to gracefully handle valid GPUFeatureNames which are added in later revisions of the spec but which the user agent has not been updated to recognize yet. If the set entries type was GPUFeatureName, the following code would throw a TypeError rather than reporting false:
Check for support of an unrecognized feature:
if (adapter.features.has('unknown-feature')) {
    // Use unknown-feature
} else {
    console.warn('unknown-feature is not supported by this adapter.');
}
3.6.2.3. WGSLLanguageFeatures
WGSLLanguageFeatures is the setlike interface of navigator.gpu.wgslLanguageFeatures. Its set entries are the string names of the WGSL language extensions supported by the implementation (regardless of the adapter or device).
[Exposed=(Window, Worker), SecureContext]
interface WGSLLanguageFeatures {
    readonly setlike<DOMString>;
};
3.6.2.4. GPUAdapterInfo
GPUAdapterInfo exposes various identifying information about an adapter.

None of the members in GPUAdapterInfo are guaranteed to be populated. It is at the user agent’s discretion which values to reveal, and it is likely that on some devices none of the values will be populated. As such, applications must be able to handle any possible GPUAdapterInfo values, including the absence of those values.
For privacy considerations, see § 2.2.6 Adapter Identifiers.
[Exposed=(Window, Worker), SecureContext]
interface GPUAdapterInfo {
    readonly attribute DOMString vendor;
    readonly attribute DOMString architecture;
    readonly attribute DOMString device;
    readonly attribute DOMString description;
};
GPUAdapterInfo has the following attributes:
- vendor, of type DOMString, readonly: The name of the vendor of the adapter, if available. Empty string otherwise.
- architecture, of type DOMString, readonly: The name of the family or class of GPUs the adapter belongs to, if available. Empty string otherwise.
- device, of type DOMString, readonly: A vendor-specific identifier for the adapter, if available. Empty string otherwise. Note: This is a value that represents the type of adapter. For example, it may be a PCI device ID. It does not uniquely identify a given piece of hardware like a serial number.
- description, of type DOMString, readonly: A human readable string describing the adapter as reported by the driver, if available. Empty string otherwise. Note: Because no formatting is applied to description, attempting to parse this value is not recommended. Applications which change their behavior based on the GPUAdapterInfo, such as applying workarounds for known driver issues, should rely on the other fields when possible.
To create a new adapter info for a given adapter adapter, run the following steps:
- Let adapterInfo be a new GPUAdapterInfo.
- If the vendor is known, set adapterInfo.vendor to the name of adapter’s vendor as a normalized identifier string. To preserve privacy, the user agent may instead set adapterInfo.vendor to the empty string or a reasonable approximation of the vendor as a normalized identifier string.
- If the architecture is known, set adapterInfo.architecture to a normalized identifier string representing the family or class of adapters to which adapter belongs. To preserve privacy, the user agent may instead set adapterInfo.architecture to the empty string or a reasonable approximation of the architecture as a normalized identifier string.
- If the device is known, set adapterInfo.device to a normalized identifier string representing a vendor-specific identifier for adapter. To preserve privacy, the user agent may instead set adapterInfo.device to the empty string or a reasonable approximation of a vendor-specific identifier as a normalized identifier string.
- If a description is known, set adapterInfo.description to a description of the adapter as reported by the driver. To preserve privacy, the user agent may instead set adapterInfo.description to the empty string or a reasonable approximation of a description.
- Return adapterInfo.
A normalized identifier string is one that matches the following pattern:

[a-z0-9]+(-[a-z0-9]+)*

Examples of valid normalized identifier strings include:
- gpu
- 3d
- 0x3b2f
- next-gen
- series-x20-ultra
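The pattern above can be checked directly with a regular expression anchored to the whole string. A non-normative sketch; the function name is an assumption.

```javascript
// Sketch (non-normative): whole-string match against the normalized
// identifier string pattern [a-z0-9]+(-[a-z0-9]+)*.
const NORMALIZED_IDENTIFIER = /^[a-z0-9]+(-[a-z0-9]+)*$/;

function isNormalizedIdentifierString(s) {
  return NORMALIZED_IDENTIFIER.test(s);
}
```

Note that the pattern forbids uppercase letters, leading or trailing hyphens, and consecutive hyphens.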
3.7. Extension Documents
"Extension Documents" are additional documents which describe new functionality which is non-normative and not part of the WebGPU/WGSL specifications. They describe functionality that builds upon these specifications, often including one or more new API feature flags and/or WGSL enable directives, or interactions with other draft web specifications.
WebGPU implementations must not expose extension functionality; doing so is a spec violation. New functionality does not become part of the WebGPU standard until it is integrated into the WebGPU specification (this document) and/or WGSL specification.
3.8. Origin Restrictions
WebGPU allows accessing image data stored in images, videos, and canvases. Restrictions are imposed on the use of cross-domain media, because shaders can be used to indirectly deduce the contents of textures which have been uploaded to the GPU.
WebGPU disallows uploading an image source if it is not origin-clean.
This also implies that the origin-clean flag for a canvas rendered using WebGPU will never be set to false.
For more information on issuing CORS requests for image and video elements, consult:
3.9. Task Sources
3.9.1. WebGPU Task Source
WebGPU defines a new task source called the WebGPU task source. It is used for the uncapturederror event and GPUDevice.lost.
To queue a global task for GPUDevice device, with a series of steps steps:
- Queue a global task on the WebGPU task source, with the global object that was used to create device, and the steps steps.
3.9.2. Automatic Expiry Task Source
WebGPU defines a new task source called the automatic expiry task source. It is used for the automatic, timed expiry (destruction) of certain objects:
- GPUTextures returned by getCurrentTexture()
- GPUExternalTextures created from HTMLVideoElements
To queue an automatic expiry task with GPUDevice device and a series of steps steps:
- Queue a global task on the automatic expiry task source, with the global object that was used to create device, and the steps steps.
Tasks from the automatic expiry task source should be processed with high priority; in particular, once queued, they should run before user-defined (JavaScript) tasks.
NOTE:
This strict behavior is more predictable, and helps developers write more portable applications by eagerly surfacing incorrect assumptions about implicit lifetimes that may otherwise be hard to detect. Developers are still strongly encouraged to test in multiple implementations.
Implementation note: It is valid to implement a high-priority expiry "task" by instead inserting additional steps at a fixed point inside the event loop processing model rather than running an actual task.
3.10. Color Spaces and Encoding
WebGPU does not provide color management. All values within WebGPU (such as texture elements) are raw numeric values, not color-managed color values.
WebGPU does interface with color-managed outputs (via GPUCanvasConfiguration) and inputs (via copyExternalImageToTexture() and importExternalTexture()). Thus, color conversion must be performed between the WebGPU numeric values and the external color values. Each such interface point locally defines an encoding (color space, transfer function, and alpha premultiplication) in which the WebGPU numeric values are to be interpreted.
WebGPU allows all of the color spaces in the PredefinedColorSpace enum. Note that each color space is defined over an extended range, as defined by the referenced CSS definitions, to represent color values outside of its space (in both chrominance and luminance).
An out-of-gamut premultiplied RGBA value is one where any of the R/G/B channel values exceeds the alpha channel value. For example, the premultiplied sRGB RGBA value [1.0, 0, 0, 0.5] represents the (unpremultiplied) color [2, 0, 0] with 50% alpha, written rgb(srgb 2 0 0 / 50%) in CSS. Just like any color value outside the sRGB color gamut, this is a well-defined point in the extended color space (except when alpha is 0, in which case there is no color). However, when such values are output to a visible canvas, the result is undefined (see GPUCanvasAlphaMode "premultiplied").
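The arithmetic above can be checked with a small sketch (plain JavaScript, not part of the WebGPU API; both helper names are ours):

```javascript
// Unpremultiply an RGBA value: divide each color channel by alpha.
// When alpha is 0 there is no defined color, so return null.
function unpremultiply([r, g, b, a]) {
  if (a === 0) return null;
  return [r / a, g / a, b / a, a];
}

// A premultiplied value is out-of-gamut if any color channel exceeds alpha.
function isOutOfGamutPremultiplied([r, g, b, a]) {
  return r > a || g > a || b > a;
}

isOutOfGamutPremultiplied([1.0, 0, 0, 0.5]); // → true
unpremultiply([1.0, 0, 0, 0.5]);             // → [2, 0, 0, 0.5]
```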
3.10.1. Color Space Conversions
A color is converted between spaces by translating its representation in one space to arepresentation in another according to the definitions above.
If the source value has fewer than 4 RGBA channels, the missing green/blue/alpha channels are set to 0, 0, 1, respectively, before conversion for color space/encoding and alpha premultiplication. After conversion, if the destination needs fewer than 4 channels, the additional channels are ignored.
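A sketch of this defaulting rule (the helper name is ours, chosen for illustration):

```javascript
// Pad a source value with fewer than 4 RGBA channels out to RGBA:
// missing green/blue channels default to 0, a missing alpha defaults to 1.
function padToRGBA(channels) {
  const defaults = [0, 0, 0, 1];
  return defaults.map((d, i) => (i < channels.length ? channels[i] : d));
}

padToRGBA([0.25]);          // grayscale V → [0.25, 0, 0, 1]
padToRGBA([0.1, 0.2, 0.3]); // RGB → [0.1, 0.2, 0.3, 1]
```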
Note: Grayscale images generally represent RGB values (V, V, V)
, or RGBA values (V, V, V, A)
in their color space.
Colors are not lossily clamped during conversion: converting from one color space to another will result in values outside the range [0, 1] if the source color values were outside the range of the destination color space’s gamut. For an sRGB destination, for example, this can occur if the source is rgba16float, is in a wider color space like Display-P3, or is premultiplied and contains out-of-gamut values.
Similarly, if the source value has a high bit depth (e.g. PNG with 16 bits per component) or extended range (e.g. canvas with float16 storage), these colors are preserved through color space conversion, with intermediate computations having at least the precision of the source.
3.10.2. Color Space Conversion Elision
If the source and destination of a color space/encoding conversion are the same, then conversion is not necessary. In general, if any given step of the conversion is an identity function (a no-op), implementations should elide it, for performance.
For optimal performance, applications should set their color space and encoding options so that the number of necessary conversions is minimized throughout the process. For various image sources of GPUImageCopyExternalImage:

- ImageBitmap:
  - Premultiplication is controlled via premultiplyAlpha.
  - Color space is controlled via colorSpaceConversion.
- 2d canvas:
  - Color space is controlled via the colorSpace context creation attribute.
- WebGL canvas:
  - Premultiplication is controlled via the premultipliedAlpha option in WebGLContextAttributes.
  - Color space is controlled via the WebGLRenderingContext's drawingBufferColorSpace state.

Note: Check browser implementation support for these features before relying on them.
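The elision rule can be illustrated with a sketch that plans only the non-identity steps of a conversion (the descriptor shape and step names are ours, chosen for illustration):

```javascript
// Plan a conversion between two encodings, skipping identity steps.
// An empty plan means the whole conversion can be elided.
function planConversion(src, dst) {
  const steps = [];
  if (src.premultiplied !== dst.premultiplied) {
    steps.push(src.premultiplied ? "unpremultiply" : "premultiply");
  }
  if (src.colorSpace !== dst.colorSpace) {
    steps.push(`convert ${src.colorSpace} -> ${dst.colorSpace}`);
  }
  return steps;
}

planConversion(
  { colorSpace: "srgb", premultiplied: true },
  { colorSpace: "srgb", premultiplied: true }); // → [] (nothing to do)
```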
3.11. Numeric conversions from JavaScript to WGSL
Several parts of the WebGPU API (pipeline-overridable constants and render pass clear values) take numeric values from WebIDL (double or float) and convert them to WGSL values (bool, i32, u32, f32, f16).
To convert an IDL value idlValue of type double or float to WGSL type T, possibly throwing a TypeError:
Note: This TypeError is generated in the device timeline and never surfaced to JavaScript.
- Assert idlValue is a finite value, since it is not unrestricted double or unrestricted float.
- Let v be the ECMAScript Number resulting from ! converting idlValue to an ECMAScript value.
- If T is bool:
  Return the WGSL bool value corresponding to the result of ! converting v to an IDL value of type boolean.
  Note: This algorithm is called after the conversion from an ECMAScript value to an IDL double or float value. If the original ECMAScript value was a non-numeric, non-boolean value like [] or {}, then the WGSL bool result may be different than if the ECMAScript value had been converted to IDL boolean directly.
- If T is i32:
  Return the WGSL i32 value corresponding to the result of ? converting v to an IDL value of type [EnforceRange] long.
- If T is u32:
  Return the WGSL u32 value corresponding to the result of ? converting v to an IDL value of type [EnforceRange] unsigned long.
- If T is f32:
  Return the WGSL f32 value corresponding to the result of ? converting v to an IDL value of type float.
- If T is f16:
  - Let wgslF32 be the WGSL f32 value corresponding to the result of ? converting v to an IDL value of type float.
  - Return f16(wgslF32), the result of ! converting the WGSL f32 value to f16 as defined in WGSL floating point conversion.
  Note: As long as the value is in-range of f32, no error is thrown, even if the value is out-of-range of f16.
To convert a GPUColor color to a texel value of texture format format, possibly throwing a TypeError:

Note: This TypeError is generated in the device timeline and never surfaced to JavaScript.
- If the components of format (assert they all have the same type) are:
  - floating-point types or normalized types: Let T be f32.
  - signed integer types: Let T be i32.
  - unsigned integer types: Let T be u32.
- Let wgslColor be a WGSL value of type vec4<T>, where the 4 components are the RGBA channels of color, each ? converted to WGSL type T.
- Convert wgslColor to format using the same conversion rules as the § 23.3.7 Output Merging step, and return the result.
Note: For non-integer types, the exact choice of value is implementation-defined. For normalized types, the value is clamped to the range of the type.
Note: In other words, the value written will be as if it was written by a WGSL shader that outputs the value represented as a vec4 of f32, i32, or u32.
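For example, writing a GPUColor to an 8-bit normalized format clamps each channel to [0, 1] and scales it to the integer range. A sketch (helper names are ours; round-to-nearest here approximates the output-merging conversion's rounding):

```javascript
// Convert one channel to 8-bit unorm: clamp to [0, 1], scale to 0..255.
function toUnorm8(component) {
  const clamped = Math.min(Math.max(component, 0), 1);
  return Math.round(clamped * 255);
}

// Convert a GPUColor-shaped value to an "rgba8unorm"-style texel.
function colorToRGBA8Unorm({ r, g, b, a }) {
  return [r, g, b, a].map(toUnorm8);
}

colorToRGBA8Unorm({ r: 0.5, g: -1, b: 2, a: 1 }); // → [128, 0, 255, 255]
```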
4. Initialization
4.1. navigator.gpu
A GPU object is available in the Window and WorkerGlobalScope contexts through the Navigator and WorkerNavigator interfaces respectively, and is exposed via navigator.gpu:
interface mixin NavigatorGPU {
    [SameObject, SecureContext] readonly attribute GPU gpu;
};
Navigator includes NavigatorGPU;
WorkerNavigator includes NavigatorGPU;
NavigatorGPU
has the following attributes:
gpu
, of type GPU, readonly-
A global singleton providing top-level entry points like
requestAdapter()
.
4.2. GPU
GPU
is the entry point to WebGPU.
[Exposed=(Window, Worker), SecureContext]
interface GPU {
    Promise<GPUAdapter?> requestAdapter(optional GPURequestAdapterOptions options = {});
    GPUTextureFormat getPreferredCanvasFormat();
    [SameObject] readonly attribute WGSLLanguageFeatures wgslLanguageFeatures;
};
GPU
has the following methods and attributes:
requestAdapter(options)
-
Requests an adapter from the user agent.The user agent chooses whether to return an adapter, and, if so,chooses according to the provided options.
Called on:
GPU
this.Arguments:
Arguments for the GPU.requestAdapter(options) method:

options — of type GPURequestAdapterOptions (not nullable, optional). Criteria used to select the adapter.

Returns: Promise<GPUAdapter?>

Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Issue the initialization steps on the Device timeline of this.
-
Return promise.
Device timeline initialization steps:
-
Let adapter be
null
. -
If the user agent chooses to return an adapter, it should:
-
Set adapter to a valid adapter, chosen according tothe rules in § 4.2.2 Adapter Selection and the criteria in options,adhering to § 4.2.1 Adapter Capability Guarantees.
The supported limits of the adapter must adhere to the requirementsdefined in § 3.6.2 Limits.
-
If adapter meets the criteria of a fallback adapter, set adapter.[[fallback]] to true.
-
-
Issue the subsequent steps on contentTimeline.
-
getPreferredCanvasFormat()
-
Returns an optimal GPUTextureFormat for displaying 8-bit depth, standard dynamic range content on this system. Must only return "rgba8unorm" or "bgra8unorm".

The returned value can be passed as the format to configure() calls on a GPUCanvasContext to ensure the associated canvas is able to display its contents efficiently.

Note: Canvases which are not displayed to the screen may or may not benefit from using this format.
Called on:
GPU
this.Returns:
GPUTextureFormat
Content timeline steps:
-
Return either "rgba8unorm" or "bgra8unorm", depending on which format is optimal for displaying WebGPU canvases on this system.
-
wgslLanguageFeatures
, of type WGSLLanguageFeatures, readonly-
The names of supported WGSL language extensions.Supported language extensions are automatically enabled.
Adapters may become invalid ("expire") at any time. Upon any change in the system’s state that could affect the result of any requestAdapter() call, the user agent should expire all previously-returned adapters. For example:
-
A physical adapter is added/removed (via plug/unplug, driver update, hang recovery, etc.)
-
The system’s power configuration has changed (laptop unplugged, power settings changed, etc.)
Note: User agents may choose to expire adapters often, even when there has been no system state change (e.g. seconds or minutes after the adapter was created). This can help obfuscate real system state changes, and make developers more aware that calling requestAdapter() again is always necessary before calling requestDevice(). If an application does encounter this situation, standard device-loss recovery handling should allow it to recover.
Requesting a GPUAdapter
with no hints:
const gpuAdapter = await navigator.gpu.requestAdapter();
4.2.1. Adapter Capability Guarantees
Any GPUAdapter
returned by requestAdapter()
must provide the following guarantees:
-
At least one of the following must be true:
-
"texture-compression-bc"
is supported. -
Both
"texture-compression-etc2"
and"texture-compression-astc"
are supported.
-
-
All supported limits must be either the default value or better.
-
All alignment-class limits must be powers of 2.
-
maxBindingsPerBindGroup
must be ≥ (max bindings per shader stage × max shader stages per pipeline), where:
- max bindings per shader stage is (maxSampledTexturesPerShaderStage + maxSamplersPerShaderStage + maxStorageBuffersPerShaderStage + maxStorageTexturesPerShaderStage + maxUniformBuffersPerShaderStage).
- max shader stages per pipeline is 2, because a GPURenderPipeline supports both a vertex and fragment shader.

Note: maxBindingsPerBindGroup does not reflect a fundamental limit; implementations should raise it to conform to this requirement, rather than lowering the other limits.
-
maxBindGroups
must be ≤maxBindGroupsPlusVertexBuffers
. -
maxVertexBuffers
must be ≤maxBindGroupsPlusVertexBuffers
. -
minUniformBufferOffsetAlignment
and minStorageBufferOffsetAlignment must both be ≥ 32 bytes.

Note: 32 bytes would be the alignment of vec4<f64>. See WebGPU Shading Language § 13.4.1 Alignment and Size.
-
maxUniformBufferBindingSize
must be ≤maxBufferSize
. -
maxStorageBufferBindingSize
must be ≤maxBufferSize
. -
maxStorageBufferBindingSize
must be a multiple of 4 bytes. -
maxVertexBufferArrayStride
must be a multiple of 4 bytes. -
maxComputeWorkgroupSizeX
must be ≤maxComputeInvocationsPerWorkgroup
. -
maxComputeWorkgroupSizeY
must be ≤maxComputeInvocationsPerWorkgroup
. -
maxComputeWorkgroupSizeZ
must be ≤maxComputeInvocationsPerWorkgroup
. -
maxComputeInvocationsPerWorkgroup
must be ≤maxComputeWorkgroupSizeX
×maxComputeWorkgroupSizeY
×maxComputeWorkgroupSizeZ
.
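A few of these guarantees can be sanity-checked mechanically. The sketch below (the function is ours; the sample numbers are WebGPU's default limit values) verifies the maxBindingsPerBindGroup and maxBindGroups relationships for a limits object shaped like GPUSupportedLimits:

```javascript
// Check two of the adapter capability guarantees against a limits object.
function checkGuarantees(limits) {
  const perStage =
    limits.maxSampledTexturesPerShaderStage +
    limits.maxSamplersPerShaderStage +
    limits.maxStorageBuffersPerShaderStage +
    limits.maxStorageTexturesPerShaderStage +
    limits.maxUniformBuffersPerShaderStage;
  const stages = 2; // vertex + fragment
  return (
    limits.maxBindingsPerBindGroup >= perStage * stages &&
    limits.maxBindGroups <= limits.maxBindGroupsPlusVertexBuffers
  );
}

checkGuarantees({
  maxSampledTexturesPerShaderStage: 16,
  maxSamplersPerShaderStage: 16,
  maxStorageBuffersPerShaderStage: 8,
  maxStorageTexturesPerShaderStage: 4,
  maxUniformBuffersPerShaderStage: 12,
  maxBindingsPerBindGroup: 1000,
  maxBindGroups: 4,
  maxBindGroupsPlusVertexBuffers: 24,
}); // → true
```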
4.2.2. Adapter Selection
GPURequestAdapterOptions
provides hints to the user agent indicating whatconfiguration is suitable for the application.
dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
    boolean forceFallbackAdapter = false;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance",
};
GPURequestAdapterOptions
has the following members:
powerPreference
, of type GPUPowerPreference-
Optionally provides a hint indicating what class of adapter should be selected fromthe system’s available adapters.
The value of this hint may influence which adapter is chosen, but it must notinfluence whether an adapter is returned or not.
Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system.For instance, some laptops have a low-power integrated GPU and a high-performancediscrete GPU. This hint may also affect the power configuration of the selected GPU tomatch the requested power preference.
Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration, state, and powerPreference, the user agent is likely to select the same adapter.

It must be one of the following values:
undefined
(or not present)-
Provides no hint to the user agent.
"low-power"
-
Indicates a request to prioritize power savings over performance.
Note: Generally, content should use this if it is unlikely to be constrained by drawingperformance; for example, if it renders only one frame per second, draws only relativelysimple geometry with simple shaders, or uses a small HTML canvas element.Developers are encouraged to use this value if their content allows, since it maysignificantly improve battery life on portable devices.
"high-performance"
-
Indicates a request to prioritize performance over power consumption.
Note: By choosing this value, developers should be aware that, for devices created on theresulting adapter, user agents are more likely to force device loss, in order to savepower by switching to a lower-power adapter.Developers are encouraged to only specify this value if they believe it is absolutelynecessary, since it may significantly decrease battery life on portable devices.
forceFallbackAdapter
, of type boolean, defaulting tofalse
-
When set to true, indicates that only a fallback adapter may be returned. If the user agent does not support a fallback adapter, this will cause requestAdapter() to resolve to null.

Note: requestAdapter() may still return a fallback adapter if forceFallbackAdapter is set to false and either no other appropriate adapter is available or the user agent chooses to return a fallback adapter. Developers that wish to prevent their applications from running on fallback adapters should check the GPUAdapter.isFallbackAdapter attribute prior to requesting a GPUDevice.
Requesting a "high-performance"
GPUAdapter
:
const gpuAdapter = await navigator.gpu.requestAdapter({ powerPreference: 'high-performance' });
4.3. GPUAdapter
A GPUAdapter
encapsulates an adapter,and describes its capabilities (features and limits).
To get a GPUAdapter
, use requestAdapter()
.
[Exposed=(Window, Worker), SecureContext]
interface GPUAdapter {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    readonly attribute boolean isFallbackAdapter;

    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
    Promise<GPUAdapterInfo> requestAdapterInfo();
};
GPUAdapter
has the following attributes:
features
, of type GPUSupportedFeatures, readonly-
The set of values in
this
.[[adapter]]
.[[features]]
. limits
, of type GPUSupportedLimits, readonly-
The limits in
this
.[[adapter]]
.[[limits]]
. isFallbackAdapter
, of type boolean, readonly-
Returns the value of
[[adapter]]
.[[fallback]]
.
GPUAdapter
has the following internal slots:
[[adapter]]
, of type adapter, readonly-
The adapter to which this
GPUAdapter
refers.
GPUAdapter
has the following methods:
requestDevice(descriptor)
-
Requests a device from the adapter.
This is a one-time action: if a device is returned successfully,the adapter becomes invalid.
Called on:
GPUAdapter
this.Arguments:
Arguments for the GPUAdapter.requestDevice(descriptor) method:

descriptor — of type GPUDeviceDescriptor (not nullable, optional). Description of the GPUDevice to request.

Returns: Promise<GPUDevice>

Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Let adapter be this.
[[adapter]]
. -
Issue the initialization steps on the Device timeline of this.
-
Return promise.
Device timeline initialization steps:
-
If any of the following requirements are unmet:
-
The set of values in descriptor.
requiredFeatures
must be a subset of those in adapter.[[features]]
.
Then issue the following steps on contentTimeline and return:

Content timeline steps:
- Reject promise with a TypeError.

Note: This is the same error that is produced if a feature name isn’t known by the browser at all (in its GPUFeatureName definition). This converges the behavior when the browser doesn’t support a feature with the behavior when a particular adapter doesn’t support a feature.
-
If any of the following requirements are unmet:
-
Each key in descriptor.
requiredLimits
must be the name of a member of supported limits. -
For each limit name key in the keys of supported limits:Let value be descriptor.
requiredLimits
[key].-
value must be no better than the value of that limit in adapter.
[[limits]]
. -
If the limit’s class is alignment, value must be a power of 2 less than 232.
-
Then issue the following steps on contentTimeline and return:
Content timeline steps:
-
Reject promise with an
OperationError
.
-
-
If adapter is invalid,or the user agent otherwise cannot fulfill the request:
-
Let device be a new device.
-
Lose the device(device,
"unknown"
).Note: This makes adapter invalid, if it wasn’t already.
Note: User agents should consider issuing developer-visible warnings in most or all cases when this occurs. Applications should perform reinitialization logic starting with requestAdapter().
Otherwise:
-
Let device be a new device with the capabilities described by descriptor.
-
Make adapter.
[[adapter]]
invalid.
-
-
Issue the subsequent steps on contentTimeline.
Content timeline steps:
-
Resolve promise with a new
GPUDevice
object device.Note: If the device is already lost because the adapter could not fulfill the request, device.
lost
has already resolved before promise resolves.
-
requestAdapterInfo()
-
Requests the GPUAdapterInfo for this GPUAdapter.

Note: Adapter info values are returned with a Promise to give user agents an opportunity to perform potentially long-running checks in the future.
Called on:
GPUAdapter
this.Returns:
Promise
<GPUAdapterInfo
>Content timeline steps:
-
Let promise be a new promise.
-
Let adapter be this.
[[adapter]]
. -
Run the following steps in parallel:
-
Resolve promise with a new adapter info for adapter.
-
-
Return promise.
-
Requesting a GPUDevice
with default features and limits:
const gpuAdapter = await navigator.gpu.requestAdapter();
const gpuDevice = await gpuAdapter.requestDevice();
4.3.1. GPUDeviceDescriptor
GPUDeviceDescriptor
describes a device request.
dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUFeatureName> requiredFeatures = [];
    record<DOMString, GPUSize64> requiredLimits = {};
    GPUQueueDescriptor defaultQueue = {};
};
GPUDeviceDescriptor
has the following members:
requiredFeatures
, of type sequence<GPUFeatureName>, defaulting to[]
-
Specifies the features that are required by the device request.The request will fail if the adapter cannot provide these features.
Exactly the specified set of features, and no more or less, will be allowed in validationof API calls on the resulting device.
requiredLimits
, of type record<DOMString, GPUSize64>, defaulting to{}
-
Specifies the limits that are required by the device request.The request will fail if the adapter cannot provide these limits.
Each key must be the name of a member of supported limits.Exactly the specified limits, and no better or worse,will be allowed in validation of API calls on the resulting device.
defaultQueue
, of type GPUQueueDescriptor, defaulting to{}
-
The descriptor for the default
GPUQueue
.
Requesting a GPUDevice
with the "texture-compression-astc"
feature if supported:
const gpuAdapter = await navigator.gpu.requestAdapter();

const requiredFeatures = [];
if (gpuAdapter.features.has('texture-compression-astc')) {
  requiredFeatures.push('texture-compression-astc');
}

const gpuDevice = await gpuAdapter.requestDevice({ requiredFeatures });
Requesting a GPUDevice
with a higher maxColorAttachmentBytesPerSample
limit:
const gpuAdapter = await navigator.gpu.requestAdapter();

if (gpuAdapter.limits.maxColorAttachmentBytesPerSample < 64) {
  // When the desired limit isn't supported, take action to either fall back to a code
  // path that does not require the higher limit or notify the user that their device
  // does not meet minimum requirements.
}

// Request higher limit of max color attachments bytes per sample.
const gpuDevice = await gpuAdapter.requestDevice({
  requiredLimits: { maxColorAttachmentBytesPerSample: 64 },
});
4.3.1.1. GPUFeatureName
Each GPUFeatureName
identifies a set of functionality which, if available, allows additional usages of WebGPU that would have otherwise been invalid.
enum GPUFeatureName {
    "depth-clip-control",
    "depth32float-stencil8",
    "texture-compression-bc",
    "texture-compression-etc2",
    "texture-compression-astc",
    "timestamp-query",
    "indirect-first-instance",
    "shader-f16",
    "rg11b10ufloat-renderable",
    "bgra8unorm-storage",
    "float32-filterable",
};
4.4. GPUDevice
A GPUDevice
encapsulates a device and exposesthe functionality of that device.
GPUDevice
is the top-level interface through which WebGPU interfaces are created.
To get a GPUDevice
, use requestDevice()
.
[Exposed=(Window, Worker), SecureContext]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUSupportedFeatures features;
    [SameObject] readonly attribute GPUSupportedLimits limits;
    [SameObject] readonly attribute GPUQueue queue;

    undefined destroy();

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});
    GPUExternalTexture importExternalTexture(GPUExternalTextureDescriptor descriptor);

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);
    Promise<GPUComputePipeline> createComputePipelineAsync(GPUComputePipelineDescriptor descriptor);
    Promise<GPURenderPipeline> createRenderPipelineAsync(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;
GPUDevice
has the following attributes:
features
, of type GPUSupportedFeatures, readonly-
A set containing the
GPUFeatureName
values of the featuressupported by the device (i.e. the ones with which it was created). limits
, of type GPUSupportedLimits, readonly-
Exposes the limits supported by the device(which are exactly the ones with which it was created).
queue
, of type GPUQueue, readonly-
The primary
GPUQueue
for this device.
The [[device]] for a GPUDevice is the device that the GPUDevice refers to.
GPUDevice
has the methods listed in its WebIDL definition above.Those not defined here are defined elsewhere in this document.
destroy()
-
Destroys the device, preventing further operations on it. Outstanding asynchronous operations will fail.
Note: It is valid to destroy a device multiple times.
Called on:
GPUDevice
this.Content timeline steps:
-
unmap()
allGPUBuffer
s from this device. -
Issue the subsequent steps on the Device timeline of this.
Device timeline steps:
-
Once all currently-enqueued operations on any queue on this device are completed, issue the subsequent steps on the current timeline.
-
Lose the device(this.
[[device]]
,"destroyed"
).
Note: Since no further operations can be enqueued on this device, implementations can abort outstanding asynchronous operations immediately and free resource allocations, including mapped memory that was just unmapped.
-
A GPUDevice
's allowed buffer usages are:
-
Always allowed:
MAP_READ
,MAP_WRITE
,COPY_SRC
,COPY_DST
,INDEX
,VERTEX
,UNIFORM
,STORAGE
,INDIRECT
,QUERY_RESOLVE
A GPUDevice
's allowed texture usages are:
-
Always allowed:
COPY_SRC
,COPY_DST
,TEXTURE_BINDING
,STORAGE_BINDING
,RENDER_ATTACHMENT
4.5. Example
A more robust example of requesting a GPUAdapter
and GPUDevice
with error handling:
let gpuDevice = null;

async function initializeWebGPU() {
  // Check to ensure the user agent supports WebGPU.
  if (!('gpu' in navigator)) {
    console.error("User agent doesn't support WebGPU.");
    return false;
  }

  // Request an adapter.
  const gpuAdapter = await navigator.gpu.requestAdapter();

  // requestAdapter may resolve with null if no suitable adapters are found.
  if (!gpuAdapter) {
    console.error('No WebGPU adapters found.');
    return false;
  }

  // Request a device.
  // Note that the promise will reject if invalid options are passed to the optional
  // dictionary. To avoid the promise rejecting, always check any features and limits
  // against the adapter's features and limits prior to calling requestDevice().
  gpuDevice = await gpuAdapter.requestDevice();

  // requestDevice will never return null, but if a valid device request can't be
  // fulfilled for some reason it may resolve to a device which has already been lost.
  // Additionally, devices can be lost at any time after creation for a variety of reasons
  // (e.g. browser resource management, driver updates), so it's a good idea to always
  // handle lost devices gracefully.
  gpuDevice.lost.then((info) => {
    console.error(`WebGPU device was lost: ${info.message}`);

    gpuDevice = null;

    // Many causes for lost devices are transient, so applications should try getting a
    // new device once a previous one has been lost unless the loss was caused by the
    // application intentionally destroying the device. Note that any WebGPU resources
    // created with the previous device (buffers, textures, etc) will need to be
    // re-created with the new one.
    if (info.reason != 'destroyed') {
      initializeWebGPU();
    }
  });

  onWebGPUInitialized();

  return true;
}

function onWebGPUInitialized() {
  // Begin creating WebGPU resources here...
}

initializeWebGPU();
5. Buffers
5.1. GPUBuffer
A GPUBuffer
represents a block of memory that can be used in GPU operations. Data is stored in linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer, subject to alignment restrictions depending on the operation. Some GPUBuffers can be mapped, which makes the block of memory accessible via an ArrayBuffer called its mapping.

GPUBuffers are created via createBuffer(). Buffers may be mappedAtCreation.
[Exposed=(Window, Worker), SecureContext]
interface GPUBuffer {
    readonly attribute GPUSize64Out size;
    readonly attribute GPUFlagsConstant usage;
    readonly attribute GPUBufferMapState mapState;

    Promise<undefined> mapAsync(GPUMapModeFlags mode, optional GPUSize64 offset = 0, optional GPUSize64 size);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size);
    undefined unmap();

    undefined destroy();
};
GPUBuffer includes GPUObjectBase;

enum GPUBufferMapState {
    "unmapped",
    "pending",
    "mapped",
};
GPUBuffer
has the following immutable properties:
size
, of type GPUSize64Out, readonly-
The length of the
GPUBuffer
allocation in bytes. usage
, of type GPUFlagsConstant, readonly-
The allowed usages for this
GPUBuffer
. [[internals]]
, of type buffer internals, readonly,override
GPUBuffer
has the following content timeline properties:
mapState
, of type GPUBufferMapState, readonly-
The current
GPUBufferMapState
of the buffer:"unmapped"
-
The buffer is not mapped for use by
this
.getMappedRange()
. "pending"
-
A mapping of the buffer has been requested, but is pending. It may succeed, or fail validation in
mapAsync()
. "mapped"
-
The buffer is mapped and
this
.getMappedRange()
may be used.
The getter steps are:
Content timeline steps:
-
If this.
[[mapping]]
is notnull
,return"mapped"
. -
If this.
[[pending_map]]
is notnull
,return"pending"
. -
Return
"unmapped"
.
[[pending_map]]
, of typePromise
<void> ornull
, initiallynull
-
The
Promise
returned by the currently-pendingmapAsync()
call.There is never more than one pending map, because
mapAsync()
will refuse immediately if a request is already in flight. [[mapping]]
, of type active buffer mapping ornull
, initiallynull
-
Set if and only if the buffer is currently mapped for use by
getMappedRange()
.Null otherwise (even if there is a[[pending_map]]
).An active buffer mapping is a structure with the following fields:
- data, of type Data Block
-
The mapping for this
GPUBuffer
. This data is accessed throughArrayBuffer
swhich are views onto this data, returned bygetMappedRange()
andstored in views. - mode, of type
GPUMapModeFlags
-
The
GPUMapModeFlags
of the map, as specified in the corresponding call tomapAsync()
orcreateBuffer()
. - range, of type tuple [
unsigned long long
,unsigned long long
] -
The range of this
GPUBuffer
that is mapped. - views, of type list<
ArrayBuffer
> -
The
ArrayBuffer
s returned viagetMappedRange()
to the application.They are tracked so they can be detached whenunmap()
is called.
To initialize an active buffer mapping with mode mode and range range:
-
Let size be range[1] - range[0].
-
Let data be ? CreateByteDataBlock(size).
NOTE:
This may result in a
RangeError
being thrown. For consistency and predictability:-
For any size at which
new ArrayBuffer()
would succeed at a given moment,this allocation should succeed at that moment. -
For any size at which
new ArrayBuffer()
deterministically throws aRangeError
, this allocation should as well.
-
-
Return an active buffer mapping with:
-
data set to data.
-
mode set to mode.
-
range set to range.
-
views set to
[]
.
-
GPUBuffer
's internal object is buffer internals, whichextends internal object with the following device timeline slots:
- state
-
The current internal state of the buffer:
- "available"
-
The buffer may be used in queue operations (unless it is invalid).
- "unavailable"
-
The buffer may not be used in queue operations due to being mapped.
- "destroyed"
-
The buffer may not be used in any operations due to being
destroy()
ed.
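The three internal states and how they relate can be sketched as a tiny state machine (only the state names come from the spec; the transition function and event names are ours, for illustration):

```javascript
// Sketch of buffer internals state transitions: mapping makes a buffer
// unavailable for queue operations, unmapping makes it available again,
// and destruction is final.
function nextState(state, event) {
  if (state === "destroyed") return "destroyed";
  switch (event) {
    case "map":     return "unavailable";
    case "unmap":   return "available";
    case "destroy": return "destroyed";
    default:        return state;
  }
}

nextState("available", "map");   // → "unavailable"
nextState("destroyed", "unmap"); // → "destroyed" (destruction is final)
```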
5.1.1. GPUBufferDescriptor
dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
GPUBufferDescriptor
has the following members:
size
, of type GPUSize64-
The size of the buffer in bytes.
usage
, of type GPUBufferUsageFlags-
The allowed usages for the buffer.
mappedAtCreation
, of type boolean, defaulting tofalse
-
If
true
creates the buffer in an already mapped state, allowinggetMappedRange()
to be called immediately. It is valid to setmappedAtCreation
totrue
even ifusage
does not containMAP_READ
orMAP_WRITE
. This can be used to set the buffer’s initial data. Guarantees that even if the buffer creation eventually fails, it will still appear as if the mapped range can be written/read to until it is unmapped.
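The mappedAtCreation pattern can be sketched as follows. This is an illustration, not a normative algorithm: device is assumed to be a GPUDevice obtained elsewhere, and alignTo4/createBufferWithData are illustrative helper names. It relies on the rule that a mappedAtCreation buffer's size must be a multiple of 4.

```javascript
// Round a byte length up to the next multiple of 4, as required for
// buffers created with mappedAtCreation: true.
function alignTo4(byteLength) {
  return Math.ceil(byteLength / 4) * 4;
}

// Create a buffer pre-filled with `data` (an ArrayBuffer or typed
// array). No MAP_WRITE usage is needed: mappedAtCreation maps the
// buffer once, for initialization only.
function createBufferWithData(device, data, usage) {
  const bytes = ArrayBuffer.isView(data)
    ? new Uint8Array(data.buffer, data.byteOffset, data.byteLength)
    : new Uint8Array(data);
  const buffer = device.createBuffer({
    size: alignTo4(bytes.byteLength),
    usage,
    mappedAtCreation: true,
  });
  new Uint8Array(buffer.getMappedRange()).set(bytes);
  buffer.unmap(); // the contents are now usable by the GPU
  return buffer;
}
```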
5.1.2. Buffer Usages
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUBufferUsage {
    const GPUFlagsConstant MAP_READ      = 0x0001;
    const GPUFlagsConstant MAP_WRITE     = 0x0002;
    const GPUFlagsConstant COPY_SRC      = 0x0004;
    const GPUFlagsConstant COPY_DST      = 0x0008;
    const GPUFlagsConstant INDEX         = 0x0010;
    const GPUFlagsConstant VERTEX        = 0x0020;
    const GPUFlagsConstant UNIFORM       = 0x0040;
    const GPUFlagsConstant STORAGE      = 0x0080;
    const GPUFlagsConstant INDIRECT      = 0x0100;
    const GPUFlagsConstant QUERY_RESOLVE = 0x0200;
};
The GPUBufferUsage
flags determine how a GPUBuffer
may be used after its creation:
MAP_READ
-
The buffer can be mapped for reading. (Example: calling
mapAsync()
withGPUMapMode.READ
)May only be combined with
COPY_DST
. MAP_WRITE
-
The buffer can be mapped for writing. (Example: calling
mapAsync()
withGPUMapMode.WRITE
)May only be combined with
COPY_SRC
. COPY_SRC
-
The buffer can be used as the source of a copy operation. (Examples: as the
source
argument of acopyBufferToBuffer()
orcopyBufferToTexture()
call.) COPY_DST
-
The buffer can be used as the destination of a copy or write operation. (Examples: as the
destination
argument of acopyBufferToBuffer()
orcopyTextureToBuffer()
call, or as the target of awriteBuffer()
call.) INDEX
-
The buffer can be used as an index buffer. (Example: passed to
setIndexBuffer()
.) VERTEX
-
The buffer can be used as a vertex buffer. (Example: passed to
setVertexBuffer()
.) UNIFORM
-
The buffer can be used as a uniform buffer. (Example: as a bind group entry for a
GPUBufferBindingLayout
with abuffer
.type
of"uniform"
.) STORAGE
-
The buffer can be used as a storage buffer. (Example: as a bind group entry for a
GPUBufferBindingLayout
with abuffer
.type
of"storage"
or"read-only-storage"
.) INDIRECT
-
The buffer can be used to store indirect command arguments. (Examples: as the
indirectBuffer
argument of adrawIndirect()
ordispatchWorkgroupsIndirect()
call.) QUERY_RESOLVE
-
The buffer can be used to capture query results. (Example: as the
destination
argument of a resolveQuerySet()
call.)
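The two mapping-related combination rules above (MAP_READ only with COPY_DST, MAP_WRITE only with COPY_SRC) can be restated as a small predicate. This is an illustrative sketch — the real validation happens inside createBuffer() on the device timeline — reusing the flag values from the GPUBufferUsage namespace:

```javascript
// Flag values as defined in the GPUBufferUsage namespace.
const MAP_READ = 0x0001, MAP_WRITE = 0x0002,
      COPY_SRC = 0x0004, COPY_DST = 0x0008;

// Returns null if `usage` satisfies the mapping-related rules,
// otherwise a message naming the violated rule.
function checkMapUsage(usage) {
  if (usage === 0) return "usage must not be 0";
  if ((usage & MAP_READ) && (usage & ~(MAP_READ | COPY_DST)))
    return "MAP_READ may only be combined with COPY_DST";
  if ((usage & MAP_WRITE) && (usage & ~(MAP_WRITE | COPY_SRC)))
    return "MAP_WRITE may only be combined with COPY_SRC";
  return null;
}
```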
5.1.3. Buffer Creation
createBuffer(descriptor)
-
Creates a
GPUBuffer
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createBuffer(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUBufferDescriptor
✘ ✘ Description of the GPUBuffer
to create.Returns:
GPUBuffer
Content timeline steps:
-
Let [b, bi] be ! create a new WebGPU object(this,
GPUBuffer
, descriptor). -
Set b.
size
to descriptor.size
. -
Set b.
usage
to descriptor.usage
. -
If descriptor.
mappedAtCreation
istrue
:-
Set b.
[[mapping]]
to ? initialize an active buffer mapping with modeWRITE
and range[0, descriptor.
.size
]
-
-
Issue the initialization steps on the Device timeline of this.
-
Return b.
Device timeline initialization steps:
-
If any of the following requirements are unmet, generate a validation error, make bi invalid, and stop.
-
device must be valid.
-
descriptor.
usage
must not be 0. -
descriptor.
usage
must be a subset of device’s allowed buffer usages. -
If descriptor.
usage
containsMAP_READ
:-
descriptor.
usage
must contain no other flagsexceptCOPY_DST
.
-
-
If descriptor.
usage
containsMAP_WRITE
:-
descriptor.
usage
must contain no other flagsexceptCOPY_SRC
.
-
-
descriptor.
size
must be ≤ device.[[device]]
.[[limits]]
.maxBufferSize
. -
If descriptor.
mappedAtCreation
istrue
:-
descriptor.
size
must be a multiple of 4.
-
-
Note: If buffer creation fails, and descriptor.
mappedAtCreation
isfalse
, any calls tomapAsync()
will reject, so any resources allocated to enable mapping can and may be discarded or recycled.-
If descriptor.
mappedAtCreation
istrue
:-
Set bi.state to "unavailable".
Else:
-
Set bi.state to "available".
-
-
Create a device allocation for bi where each byte is zero.
If the allocation fails without side-effects, generate an out-of-memory error,make bi invalid, and return.
-
Creating a 128 byte uniform buffer that can be written into:
const buffer = gpuDevice.createBuffer({
    size: 128,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
5.1.4. Buffer Destruction
An application that no longer requires a GPUBuffer
can choose to lose access to it before garbage collection by calling destroy()
. Destroying a buffer also unmaps it, freeing any memory allocated for the mapping.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer
once all previously submitted operations using it are complete.
destroy()
-
Destroys the
GPUBuffer
.Note: It is valid to destroy a buffer multiple times.
Called on:
GPUBuffer
this.Returns:
undefined
Content timeline steps:
-
Call this.
unmap()
. -
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Set this.
[[internals]]
.state to"destroyed".
Note: Since no further operations can be enqueued using this buffer, implementations canfree resource allocations, including mapped memory that was just unmapped.
-
5.2. Buffer Mapping
An application can request to map a GPUBuffer
so that it can access its content via ArrayBuffer
s that represent part of the GPUBuffer
'sallocations. Mapping a GPUBuffer
is requested asynchronously with mapAsync()
so that the user agent can ensure the GPU has finished using the GPUBuffer
before the application can access its content. A mapped GPUBuffer
cannot be used by the GPU and must be unmapped using unmap()
before work using it can be submitted to the Queue timeline.
Once the GPUBuffer
is mapped, the application can synchronously ask for accessto ranges of its content with getMappedRange()
.The returned ArrayBuffer
can only be detached by unmap()
(directly, or via GPUBuffer
.destroy()
or GPUDevice
.destroy()
),and cannot be transferred.A TypeError
is thrown by any other operation that attempts to do so.
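The full readback cycle described above — copy into a mappable buffer, map asynchronously, read, then unmap — can be sketched as follows. This is a non-normative illustration: device and src (a buffer created with COPY_SRC) are assumed to exist, and readBack is an illustrative name, not part of the API.

```javascript
// Copy `size` bytes of `src` into a staging buffer, map it for
// reading, and return a copy of the bytes.
async function readBack(device, src, size) {
  const staging = device.createBuffer({
    size,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });
  const encoder = device.createCommandEncoder();
  encoder.copyBufferToBuffer(src, 0, staging, 0, size);
  device.queue.submit([encoder.finish()]);

  // Resolves once the GPU has finished with the buffer and it is mapped.
  await staging.mapAsync(GPUMapMode.READ);
  const copy = new Uint8Array(staging.getMappedRange()).slice();
  staging.unmap(); // detaches the ArrayBuffer returned above
  return copy;
}
```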
typedef [EnforceRange] unsigned long GPUMapModeFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUMapMode {
    const GPUFlagsConstant READ  = 0x0001;
    const GPUFlagsConstant WRITE = 0x0002;
};
The GPUMapMode
flags determine how a GPUBuffer
is mapped when calling mapAsync()
:
READ
-
Only valid with buffers created with the
MAP_READ
usage.Once the buffer is mapped, calls to
getMappedRange()
will return anArrayBuffer
containing the buffer’s current values. Changes to the returnedArrayBuffer
will be discarded afterunmap()
is called. WRITE
-
Only valid with buffers created with the
MAP_WRITE
usage.Once the buffer is mapped, calls to
getMappedRange()
will return anArrayBuffer
containing the buffer’s current values. Changes to the returnedArrayBuffer
will be stored in theGPUBuffer
afterunmap()
is called.Note: Since the
MAP_WRITE
buffer usage may only be combined with theCOPY_SRC
buffer usage, mapping for writing can never return valuesproduced by the GPU, and the returnedArrayBuffer
will only ever contain the defaultinitialized data (zeros) or data written by the webpage during a previous mapping.
mapAsync(mode, offset, size)
-
Maps the given range of the
GPUBuffer
and resolves the returnedPromise
when theGPUBuffer
's content is ready to be accessed withgetMappedRange()
.The resolution of the returned
Promise
only indicates that the buffer has been mapped.It does not guarantee the completion of any other operations visible to the content timeline,and in particular does not imply that any otherPromise
returned fromonSubmittedWorkDone()
ormapAsync()
on otherGPUBuffer
shave resolved.The resolution of the
Promise
returned fromonSubmittedWorkDone()
does imply the completion ofmapAsync()
calls made prior to that call, on GPUBuffer
s last used exclusively on that queue.Called on:
GPUBuffer
this.Arguments:
Arguments for the GPUBuffer.mapAsync(mode, offset, size) method. Parameter Type Nullable Optional Description mode
GPUMapModeFlags
✘ ✘ Whether the buffer should be mapped for reading or writing. offset
GPUSize64
✘ ✔ Offset in bytes into the buffer to the start of the range to map. size
GPUSize64
✘ ✔ Size in bytes of the range to map. Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
If this.
[[pending_map]]
is notnull
: -
Let p be a new
Promise
. -
Set this.
[[pending_map]]
to p. -
Issue the validation steps on the Device timeline of this.
[[device]]
. -
Return p.
Device timeline validation steps:
-
If size is
undefined
:-
Let rangeSize be max(0, this.
size
- offset).
Otherwise:
-
Let rangeSize be size.
-
-
If any of the following conditions are unsatisfied:
-
this is a valid
GPUBuffer
. -
this.
[[internals]]
.state is "available". -
offset is a multiple of 8.
-
rangeSize is a multiple of 4.
-
offset + rangeSize ≤ this.
size
-
mode contains only bits defined in
GPUMapMode
. -
mode contains exactly one of
READ
orWRITE
. -
If mode contains
READ
then this.usage
must containMAP_READ
. -
If mode contains
WRITE
then this.usage
must containMAP_WRITE
.
Then:
-
Issue the map failure steps on contentTimeline.
-
Generate a validation error.
-
Return.
-
-
Set this.
[[internals]]
.state to "unavailable".Note: Since the buffer is mapped, its contents cannot change between this completion and
unmap()
. -
If this.
[[device]]
is lost, or when it becomes lost:-
Issue the map failure steps on contentTimeline.
Otherwise, at an unspecified point:
-
after the completion of currently-enqueued operations that use this,
-
and no later than the next device timeline operation after the device timeline becomes informed of the completion of all currently-enqueued operations (regardless of whether they use this),
run the following steps:
-
Let internalStateAtCompletion be this.
[[internals]]
.state.Note: If, and only if, at this point the buffer has become "available"again due to an
unmap()
call, then[[pending_map]]
!= p below,so mapping will not succeed in the steps below. -
Let dataForMappedRegion be the contents of this starting at offset offset, for rangeSize bytes.
-
Issue the map success steps on the contentTimeline.
-
Content timeline map success steps:
-
If this.
[[pending_map]]
!= p:Note: The map has been cancelled by
unmap()
.-
Assert p is rejected.
-
Return.
-
-
Assert p is pending.
-
Assert internalStateAtCompletion is "unavailable".
-
Let mapping be initialize an active buffer mapping with mode mode and range
[offset, offset + rangeSize]
.If this allocation fails:
-
Set this.
[[pending_map]]
tonull
,and reject p with aRangeError
. -
Return.
-
-
Set the content of mapping.data to dataForMappedRegion.
-
Set this.
[[mapping]]
to mapping. -
Set this.
[[pending_map]]
tonull
,and resolve p.
Content timeline map failure steps:
-
If this.
[[pending_map]]
!= p:Note: The map has been cancelled by
unmap()
.-
Assert p is already rejected.
-
Return.
-
-
Assert p is still pending.
-
Set this.
[[pending_map]]
tonull
,and reject p with anOperationError
.
-
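The range validation conditions above can be restated as a pure function. This is a hypothetical helper for illustration — the real checks run on the device timeline — but it is a direct transcription of the offset, size, and bounds rules:

```javascript
// Returns null if [offset, offset + size) is a valid mapAsync() range
// for a buffer of `bufferSize` bytes, otherwise the violated rule.
function mapRangeError(bufferSize, offset = 0, size) {
  // When size is omitted, the range extends to the end of the buffer.
  const rangeSize = size === undefined ? Math.max(0, bufferSize - offset) : size;
  if (offset % 8 !== 0) return "offset must be a multiple of 8";
  if (rangeSize % 4 !== 0) return "size must be a multiple of 4";
  if (offset + rangeSize > bufferSize) return "range out of bounds";
  return null;
}
```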
getMappedRange(offset, size)
-
Returns an
ArrayBuffer
with the contents of theGPUBuffer
in the given mapped range.Called on:
GPUBuffer
this.Arguments:
Arguments for the GPUBuffer.getMappedRange(offset, size) method. Parameter Type Nullable Optional Description offset
GPUSize64
✘ ✔ Offset in bytes into the buffer to return buffer contents from. size
GPUSize64
✘ ✔ Size in bytes of the ArrayBuffer
to return.Returns:
ArrayBuffer
Content timeline steps:
-
If size is missing:
-
Let rangeSize be max(0, this.
size
- offset).
Otherwise, let rangeSize be size.
-
-
If any of the following conditions are unsatisfied, throw an
OperationError
and stop.-
this.
[[mapping]]
is notnull
. -
offset is a multiple of 8.
-
rangeSize is a multiple of 4.
-
offset ≥ this.
[[mapping]]
.range[0]. -
offset + rangeSize ≤ this.
[[mapping]]
.range[1]. -
[offset, offset + rangeSize) does not overlap another range in this.
[[mapping]]
.views.
Note: It is always valid to get mapped ranges of a
GPUBuffer
that ismappedAtCreation
, even if it is invalid, because the Content timeline might not know it is invalid. -
-
Let data be this.
[[mapping]]
.data. -
Let view be ! create an ArrayBuffer of size rangeSize,but with its pointer mutably referencing the content of data at offset(offset -
[[mapping]]
.range[0]).Note: A
RangeError
may not be thrown here, because the data has alreadybeen allocated duringmapAsync()
orcreateBuffer()
. -
Set view.
[[ArrayBufferDetachKey]]
to "WebGPUBufferMapping".Note: This causes a
TypeError
to be thrown if an attempt is made to DetachArrayBuffer, except byunmap()
. -
Append view to this.
[[mapping]]
.views. -
Return view.
Note: User agents should consider issuing a developer-visible warning if
getMappedRange()
succeeds without having checked the status of the map, by waiting for mapAsync()
to succeed, querying a mapState
of "mapped"
, or waiting for a lateronSubmittedWorkDone()
call to succeed. -
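The "does not overlap another range" condition above amounts to a half-open interval intersection test, which can be sketched as (illustrative helper, not an API):

```javascript
// `views` holds [start, end) tuples for ranges already returned by
// getMappedRange(). A new [offset, offset + size) range is rejected
// if it intersects any of them.
function overlapsExisting(offset, size, views) {
  return views.some(([s, e]) => offset < e && offset + size > s);
}
```

Adjacent ranges (one ending exactly where the other begins) do not overlap, since the intervals are half-open.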
unmap()
-
Unmaps the mapped range of the
GPUBuffer
and makes its contents available for use by the GPU again.Called on:
GPUBuffer
this.Returns:
undefined
Content timeline steps:
-
If this.
[[pending_map]]
is notnull
:-
Reject this.
[[pending_map]]
with anAbortError
. -
Set this.
[[pending_map]]
tonull
.
-
-
If this.
[[mapping]]
isnull
:-
Return.
-
-
For each
ArrayBuffer
ab in this.[[mapping]]
.views:-
Perform DetachArrayBuffer(ab, "WebGPUBufferMapping").
-
-
Let bufferUpdate be
null
. -
If this.
[[mapping]]
.mode containsWRITE
:-
Set bufferUpdate to {
data
: this.[[mapping]]
.data,offset
: this.[[mapping]]
.range[0]}.
Note: When a buffer is mapped without the
WRITE
mode, thenunmapped, any local modifications done by the application to the mapped rangesArrayBuffer
are discarded and will not affect the content of later mappings. -
-
Set this.
[[mapping]]
tonull
. -
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
If this.
[[device]]
is invalid, return. -
If bufferUpdate is not
null
:-
Issue the following steps on the Queue timeline of this.
[[device]]
.queue
:Queue timeline steps:
-
Update the contents of this at offset bufferUpdate.
offset
with the data bufferUpdate.data
.
-
-
-
Set this.
[[internals]]
.state to "available".
-
6. Textures and Texture Views
6.1. GPUTexture
texture
One texture consists of one or more texture subresources,each uniquely identified by a mipmap level and,for 2d
textures only, array layer and aspect.
A texture subresource is a subresource: each can be used in different internal usages within a single usage scope.
Each subresource in a mipmap level is approximately half the size,in each spatial dimension, of the corresponding resource in the lesser level(see logical miplevel-specific texture extent).The subresource in level 0 has the dimensions of the texture itself.These are typically used to represent levels of detail of a texture. GPUSampler
and WGSL provide facilities for selecting and interpolating between levels ofdetail, explicitly or automatically.
A "2d"
texture may be an array of array layers.Each subresource in a layer is the same size as the corresponding resources in other layers.For non-2d textures, all subresources have an array layer index of 0.
Each subresource has an aspect.Color textures have just one aspect: color. Depth-or-stencil format textures may have multiple aspects:a depth aspect,a stencil aspect, or both, and may be used in special ways, such as in depthStencilAttachment
and in "depth"
bindings.
A "3d"
texture may have multiple slices, each being thetwo-dimensional image at a particular z
value in the texture.Slices are not separate subresources.
[Exposed=(Window, Worker), SecureContext]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});
    undefined destroy();

    readonly attribute GPUIntegerCoordinateOut width;
    readonly attribute GPUIntegerCoordinateOut height;
    readonly attribute GPUIntegerCoordinateOut depthOrArrayLayers;
    readonly attribute GPUIntegerCoordinateOut mipLevelCount;
    readonly attribute GPUSize32Out sampleCount;
    readonly attribute GPUTextureDimension dimension;
    readonly attribute GPUTextureFormat format;
    readonly attribute GPUFlagsConstant usage;
};
GPUTexture includes GPUObjectBase;
GPUTexture
has the following attributes:
width
, of type GPUIntegerCoordinateOut, readonly-
The width of this
GPUTexture
. height
, of type GPUIntegerCoordinateOut, readonly-
The height of this
GPUTexture
. depthOrArrayLayers
, of type GPUIntegerCoordinateOut, readonly-
The depth or layer count of this
GPUTexture
. mipLevelCount
, of type GPUIntegerCoordinateOut, readonly-
The number of mip levels of this
GPUTexture
. sampleCount
, of type GPUSize32Out, readonly-
The sample count of this
GPUTexture
. dimension
, of type GPUTextureDimension, readonly-
The dimension of the set of texels for each of this
GPUTexture
's subresources. format
, of type GPUTextureFormat, readonly-
The format of this
GPUTexture
. usage
, of type GPUFlagsConstant, readonly-
The allowed usages for this
GPUTexture
.
GPUTexture
has the following internal slots:
[[size]]
, of typeGPUExtent3D
-
The size of the texture (same as the
width
,height
, anddepthOrArrayLayers
attributes). [[viewFormats]]
, of type sequence<GPUTextureFormat
>-
The set of
GPUTextureFormat
s that can be used as GPUTextureViewDescriptor
.format
when creating views on thisGPUTexture
. [[destroyed]]
, of typeboolean
, initially false-
If the texture is destroyed, it can no longer be used in any operation,and its underlying memory can be freed.
compute render extent(baseSize, mipLevel)
Arguments:
-
GPUExtent3D
baseSize -
GPUSize32
mipLevel
Returns: GPUExtent3DDict
-
Let extent be a new
GPUExtent3DDict
object. -
Set extent.
width
to max(1, baseSize.width ≫ mipLevel). -
Set extent.
height
to max(1, baseSize.height ≫ mipLevel). -
Set extent.
depthOrArrayLayers
to 1. -
Return extent.
The logical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel. It is calculated by this procedure:
Logical miplevel-specific texture extent(descriptor, mipLevel)
Arguments:
-
GPUTextureDescriptor
descriptor -
GPUSize32
mipLevel
Returns: GPUExtent3DDict
-
Let extent be a new
GPUExtent3DDict
object. -
If descriptor.
dimension
is:"1d"
-
-
Set extent.
width
to max(1, descriptor.size
.width ≫ mipLevel). -
Set extent.
height
to 1. -
Set extent.
depthOrArrayLayers
to 1.
-
"2d"
-
-
Set extent.
width
to max(1, descriptor.size
.width ≫ mipLevel). -
Set extent.
height
to max(1, descriptor.size
.height ≫ mipLevel). -
Set extent.
depthOrArrayLayers
to descriptor.size
.depthOrArrayLayers.
-
"3d"
-
-
Set extent.
width
to max(1, descriptor.size
.width ≫ mipLevel). -
Set extent.
height
to max(1, descriptor.size
.height ≫ mipLevel). -
Set extent.
depthOrArrayLayers
to max(1, descriptor.size
.depthOrArrayLayers ≫ mipLevel).
-
-
Return extent.
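The procedure above can be transcribed directly into JavaScript (width/height/depthOrArrayLayers as plain numbers; dimension is "1d", "2d", or "3d"; the function name is illustrative):

```javascript
// Logical miplevel-specific texture extent: each spatial dimension is
// halved (shifted right) per mip level, clamped to a minimum of 1.
// Array layers are not mipmapped, so "2d" keeps them unchanged.
function logicalMipExtent(dimension, size, mipLevel) {
  const w = Math.max(1, size.width >> mipLevel);
  if (dimension === "1d") return { width: w, height: 1, depthOrArrayLayers: 1 };
  const h = Math.max(1, size.height >> mipLevel);
  if (dimension === "2d")
    return { width: w, height: h, depthOrArrayLayers: size.depthOrArrayLayers };
  return { width: w, height: h,
           depthOrArrayLayers: Math.max(1, size.depthOrArrayLayers >> mipLevel) };
}
```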
The physical miplevel-specific texture extent of a texture is the size of the texture in texels at a specific miplevel that includes the possible extra paddingto form complete texel blocks in the texture. It is calculated by this procedure:
Physical miplevel-specific texture extent(descriptor, mipLevel)
Arguments:
-
GPUTextureDescriptor
descriptor -
GPUSize32
mipLevel
Returns: GPUExtent3DDict
-
Let extent be a new
GPUExtent3DDict
object. -
Let logicalExtent be logical miplevel-specific texture extent(descriptor, mipLevel).
-
If descriptor.
dimension
is:"1d"
-
-
Set extent.
width
to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width. -
Set extent.
height
to 1. -
Set extent.
depthOrArrayLayers
to 1.
-
"2d"
-
-
Set extent.
width
to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width. -
Set extent.
height
to logicalExtent.height rounded up to the nearest multiple of descriptor’s texel block height. -
Set extent.
depthOrArrayLayers
to logicalExtent.depthOrArrayLayers.
-
"3d"
-
-
Set extent.
width
to logicalExtent.width rounded up to the nearest multiple of descriptor’s texel block width. -
Set extent.
height
to logicalExtent.height rounded up to the nearest multiple of descriptor’s texel block height. -
Set extent.
depthOrArrayLayers
to logicalExtent.depthOrArrayLayers.
-
-
Return extent.
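The padding step above only differs from the logical extent by rounding each spatial dimension up to the texel block size. A sketch, assuming the caller supplies the block size for the format (for example { width: 4, height: 4 } for BC/ETC2/ASTC-4x4 compressed formats, { width: 1, height: 1 } for uncompressed formats):

```javascript
// Pad a logical miplevel extent out to complete texel blocks.
function physicalMipExtent(logicalExtent, block) {
  const roundUp = (n, k) => Math.ceil(n / k) * k;
  return {
    width: roundUp(logicalExtent.width, block.width),
    height: roundUp(logicalExtent.height, block.height),
    depthOrArrayLayers: logicalExtent.depthOrArrayLayers, // never padded
  };
}
```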
6.1.1. GPUTextureDescriptor
dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
    sequence<GPUTextureFormat> viewFormats = [];
};
GPUTextureDescriptor
has the following members:
size
, of type GPUExtent3D-
The width, height, and depth or layer count of the texture.
mipLevelCount
, of type GPUIntegerCoordinate, defaulting to1
-
The number of mip levels the texture will contain.
sampleCount
, of type GPUSize32, defaulting to1
-
The sample count of the texture. A
sampleCount
>1
indicatesa multisampled texture. dimension
, of type GPUTextureDimension, defaulting to"2d"
-
Whether the texture is one-dimensional, an array of two-dimensional layers, or three-dimensional.
format
, of type GPUTextureFormat-
The format of the texture.
usage
, of type GPUTextureUsageFlags-
The allowed usages for the texture.
viewFormats
, of type sequence<GPUTextureFormat>, defaulting to[]
-
Specifies what view
format
values will be allowed when callingcreateView()
on this texture (in addition to the texture’s actualformat
).NOTE:
Adding a format to this list may have a significant performance impact, so it is best to avoid adding formats unnecessarily.
The actual performance impact is highly dependent on the target system; developers must test various systems to find out the impact on their particular application. For example, on some systems any texture with a
format
orviewFormats
entry including"rgba8unorm-srgb"
will perform less optimally than a"rgba8unorm"
texture which does not. Similar caveats exist for other formats and pairs of formats on other systems.Formats in this list must be texture view format compatible with the texture format.
Two
GPUTextureFormat
s format and viewFormat are texture view format compatible if:-
format equals viewFormat, or
-
format and viewFormat differ only in whether they are
srgb
formats (have the-srgb
suffix).
-
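The compatibility rule above reduces to comparing formats after ignoring an -srgb suffix, which can be sketched as a predicate (illustrative, not an API):

```javascript
// Two formats are texture view format compatible if they are equal,
// or differ only in the presence of the "-srgb" suffix.
function textureViewFormatCompatible(format, viewFormat) {
  const strip = f => f.endsWith("-srgb") ? f.slice(0, -"-srgb".length) : f;
  return strip(format) === strip(viewFormat);
}
```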
enum GPUTextureDimension {
    "1d",
    "2d",
    "3d",
};
"1d"
-
Specifies a texture that has one dimension, width.
"2d"
-
Specifies a texture that has a width and height, and may have layers. Only
"2d"
textures may have mipmaps, be multisampled, use a compressed ordepth/stencil format, and be used as a render attachment. "3d"
-
Specifies a texture that has a width, height, and depth.
6.1.2. Texture Usages
typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
[Exposed=(Window, Worker), SecureContext]
namespace GPUTextureUsage {
    const GPUFlagsConstant COPY_SRC          = 0x01;
    const GPUFlagsConstant COPY_DST          = 0x02;
    const GPUFlagsConstant TEXTURE_BINDING   = 0x04;
    const GPUFlagsConstant STORAGE_BINDING   = 0x08;
    const GPUFlagsConstant RENDER_ATTACHMENT = 0x10;
};
The GPUTextureUsage
flags determine how a GPUTexture
may be used after its creation:
COPY_SRC
-
The texture can be used as the source of a copy operation. (Examples: as the
source
argument of acopyTextureToTexture()
orcopyTextureToBuffer()
call.) COPY_DST
-
The texture can be used as the destination of a copy or write operation. (Examples: as the
destination
argument of acopyTextureToTexture()
orcopyBufferToTexture()
call, or as the target of awriteTexture()
call.) TEXTURE_BINDING
-
The texture can be bound for use as a sampled texture in a shader (Example: as a bind groupentry for a
GPUTextureBindingLayout
.) STORAGE_BINDING
-
The texture can be bound for use as a storage texture in a shader (Example: as a bind groupentry for a
GPUStorageTextureBindingLayout
.) RENDER_ATTACHMENT
-
The texture can be used as a color or depth/stencil attachment in a render pass.(Example: as a
GPURenderPassColorAttachment
.view
orGPURenderPassDepthStencilAttachment
.view
.)
maximum mipLevel count(dimension, size)
Arguments:
-
dimension
dimension -
size
size
-
Calculate the max dimension value m:
-
If dimension is:
"1d"
-
Return 1.
"2d"
-
Let m = max(size.width, size.height).
"3d"
-
Let m = max(max(size.width, size.height), size.depthOrArrayLayers).
-
-
Return floor(log2(m)) + 1.
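The procedure above in JavaScript (the function name is illustrative):

```javascript
// Maximum mip level count for a texture of the given dimension and
// size: 1 for "1d", otherwise floor(log2(largest dimension)) + 1.
function maxMipLevelCount(dimension, size) {
  if (dimension === "1d") return 1;
  let m = Math.max(size.width, size.height);
  if (dimension === "3d") m = Math.max(m, size.depthOrArrayLayers);
  return Math.floor(Math.log2(m)) + 1;
}
```

For example, a 16×16 "2d" texture admits at most 5 mip levels (16, 8, 4, 2, 1).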
6.1.3. Texture Creation
createTexture(descriptor)
-
Creates a
GPUTexture
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createTexture(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUTextureDescriptor
✘ ✘ Description of the GPUTexture
to create.Returns:
GPUTexture
Content timeline steps:
-
? validate GPUExtent3D shape(descriptor.
size
). -
? Validate texture format required features of descriptor.
format
with this.[[device]]
. -
? Validate texture format required features of each element of descriptor.
viewFormats
with this.[[device]]
. -
Let t be a new
GPUTexture
object. -
Set t.
width
to descriptor.size
.width. -
Set t.
height
to descriptor.size
.height. -
Set t.
depthOrArrayLayers
to descriptor.size
.depthOrArrayLayers. -
Set t.
mipLevelCount
to descriptor.mipLevelCount
. -
Set t.
sampleCount
to descriptor.sampleCount
. -
Set t.
dimension
to descriptor.dimension
. -
Set t.
format
to descriptor.format
. -
Set t.
usage
to descriptor.usage
. -
Issue the initialization steps on the Device timeline of this.
-
Return t.
Device timeline initialization steps:
-
If any of the following conditions are unsatisfied generate a validation error, make t invalid, and stop.
-
validating GPUTextureDescriptor(this, descriptor) returns
true
.
-
-
Set t.
[[size]]
to descriptor.size
. -
Set t.
[[viewFormats]]
to descriptor.viewFormats
.
-
validating GPUTextureDescriptor(GPUDevice
this, GPUTextureDescriptor
descriptor):
Return true
if all of the following requirements are met, and false
otherwise:
-
this must be a valid
GPUDevice
. -
descriptor.
usage
must not be 0. -
descriptor.
usage
must contain only bits present in this’s allowed texture usages. -
descriptor.
size
.width, descriptor.size
.height,and descriptor.size
.depthOrArrayLayers must be > zero. -
descriptor.
mipLevelCount
must be > zero. -
descriptor.
sampleCount
must be either 1 or 4. -
If descriptor.
dimension
is:"1d"
-
-
descriptor.
size
.width must be ≤ this.limits
.maxTextureDimension1D
. -
descriptor.
size
.height must be 1. -
descriptor.
size
.depthOrArrayLayers must be 1. -
descriptor.
sampleCount
must be 1. -
descriptor.
format
must not be a compressed format or depth-or-stencil format.
-
"2d"
-
-
descriptor.
size
.width must be ≤ this.limits
.maxTextureDimension2D
. -
descriptor.
size
.height must be ≤ this.limits
.maxTextureDimension2D
. -
descriptor.
size
.depthOrArrayLayers must be ≤ this.limits
.maxTextureArrayLayers
.
-
"3d"
-
-
descriptor.
size
.width must be ≤ this.limits
.maxTextureDimension3D
. -
descriptor.
size
.height must be ≤ this.limits
.maxTextureDimension3D
. -
descriptor.
size
.depthOrArrayLayers must be ≤ this.limits
.maxTextureDimension3D
. -
descriptor.
sampleCount
must be 1. -
descriptor.
format
must not be a compressed format or depth-or-stencil format.
-
-
descriptor.
size
.width must be multiple of texel block width. -
descriptor.
size
.height must be multiple of texel block height. -
If descriptor.
sampleCount
> 1:-
descriptor.
mipLevelCount
must be 1. -
descriptor.
size
.depthOrArrayLayers must be 1. -
descriptor.
usage
must not include theSTORAGE_BINDING
bit. -
descriptor.
usage
must include theRENDER_ATTACHMENT
bit. -
descriptor.
format
must support multisampling according to § 26.1 Texture Format Capabilities.
-
-
descriptor.
mipLevelCount
must be ≤ maximum mipLevel count(descriptor.dimension
, descriptor.size
). -
If descriptor.
usage
includes theRENDER_ATTACHMENT
bit:-
descriptor.
format
must be a renderable format. -
descriptor.
dimension
must be either"2d"
or"3d"
.
-
-
If descriptor.
usage
includes theSTORAGE_BINDING
bit:-
descriptor.
format
must be listed in § 26.1.1 Plain color formats tablewithSTORAGE_BINDING
capability for the appropriate access mode.
-
-
For each viewFormat in descriptor.
viewFormats
, descriptor.format
and viewFormat must be texture view format compatible.
Creating a 16x16, RGBA, 2D texture with one array layer and one mip level:
const texture = gpuDevice.createTexture({
    size: { width: 16, height: 16 },
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING,
});
6.1.4. Texture Destruction
An application that no longer requires a GPUTexture
can choose to lose access to it beforegarbage collection by calling destroy()
.
Note: This allows the user agent to reclaim the GPU memory associated with the GPUTexture
onceall previously submitted operations using it are complete.
destroy()
-
Destroys the
GPUTexture
.Called on:
GPUTexture
this.Returns:
undefined
Content timeline steps:
-
Set this.
[[destroyed]]
to true.
-
6.2. GPUTextureView
A GPUTextureView
is a view onto some subset of the texture subresources defined bya particular GPUTexture
.
[Exposed=(Window, Worker), SecureContext]
interface GPUTextureView {
};
GPUTextureView includes GPUObjectBase;
GPUTextureView
has the following internal slots:
[[texture]]
-
The
GPUTexture
into which this is a view. [[descriptor]]
-
The
GPUTextureViewDescriptor
describing this texture view.All optional fields of
GPUTextureViewDescriptor
are defined. [[renderExtent]]
-
For renderable views, this is the effective
GPUExtent3DDict
for rendering.Note: this extent depends on the
baseMipLevel
.
The set of subresources of a texture view view, with [[descriptor]]
desc, is the subset of the subresources of view.[[texture]]
for which each subresource s satisfies the following:
-
The mipmap level of s is ≥ desc.
baseMipLevel
and < desc.baseMipLevel
+ desc.mipLevelCount
. -
The array layer of s is ≥ desc.
baseArrayLayer
and < desc.baseArrayLayer
+ desc.arrayLayerCount
. -
The aspect of s is in the set of aspects of desc.
aspect
.
Two GPUTextureView
objects are texture-view-aliasing if and only if their sets of subresources intersect.
6.2.1. Texture View Creation
dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount;
};
GPUTextureViewDescriptor
has the following members:
format
, of type GPUTextureFormat-
The format of the texture view. Must be either the
format
of thetexture or one of theviewFormats
specified during its creation. dimension
, of type GPUTextureViewDimension-
The dimension to view the texture as.
aspect
, of type GPUTextureAspect, defaulting to"all"
-
Which
aspect(s)
of the texture are accessible to the texture view. baseMipLevel
, of type GPUIntegerCoordinate, defaulting to0
-
The first (most detailed) mipmap level accessible to the texture view.
mipLevelCount
, of type GPUIntegerCoordinate-
How many mipmap levels, starting with
baseMipLevel
, are accessible to the texture view. baseArrayLayer
, of type GPUIntegerCoordinate, defaulting to0
-
The index of the first array layer accessible to the texture view.
arrayLayerCount
, of type GPUIntegerCoordinate-
How many array layers, starting with
baseArrayLayer
, are accessible to the texture view.
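The defaulting of the two count members can be sketched as follows. This is a simplified illustration of the spec's descriptor-defaulting behavior, covering only the common "2d"/"2d-array" cases: when omitted, each count covers the remaining levels or layers of the texture. (Cube views default arrayLayerCount differently, to 6 per cubemap, which this sketch ignores; resolvedCounts is an illustrative name.)

```javascript
// Resolve omitted mipLevelCount/arrayLayerCount against the texture's
// own level and layer counts.
function resolvedCounts(texture, desc) {
  return {
    mipLevelCount: desc.mipLevelCount ??
      texture.mipLevelCount - (desc.baseMipLevel ?? 0),
    arrayLayerCount: desc.arrayLayerCount ??
      texture.depthOrArrayLayers - (desc.baseArrayLayer ?? 0),
  };
}
```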
enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d",
};

"1d"
    The texture is viewed as a 1-dimensional image.
    Corresponding WGSL types:
    - texture_1d
    - texture_storage_1d
"2d"
    The texture is viewed as a single 2-dimensional image.
    Corresponding WGSL types:
    - texture_2d
    - texture_storage_2d
    - texture_multisampled_2d
    - texture_depth_2d
    - texture_depth_multisampled_2d
"2d-array"
    The texture is viewed as an array of 2-dimensional images.
    Corresponding WGSL types:
    - texture_2d_array
    - texture_storage_2d_array
    - texture_depth_2d_array
"cube"
    The texture is viewed as a cubemap. The view has 6 array layers, corresponding to the [+X, -X, +Y, -Y, +Z, -Z] faces of the cube. Sampling is done seamlessly across the faces of the cubemap.
    Corresponding WGSL types:
    - texture_cube
    - texture_depth_cube
"cube-array"
    The texture is viewed as a packed array of n cubemaps, each with 6 array layers corresponding to the [+X, -X, +Y, -Y, +Z, -Z] faces of the cube. Sampling is done seamlessly across the faces of the cubemaps.
    Corresponding WGSL types:
    - texture_cube_array
    - texture_depth_cube_array
"3d"
    The texture is viewed as a 3-dimensional image.
    Corresponding WGSL types:
    - texture_3d
    - texture_storage_3d
Each GPUTextureAspect value corresponds to a set of aspects. The set of aspects is defined for each value below.

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only",
};

"all"
    All available aspects of the texture format will be accessible to the texture view. For color formats the color aspect will be accessible. For combined depth-stencil formats both the depth and stencil aspects will be accessible. Depth-or-stencil formats with a single aspect will only make that aspect accessible.
    The set of aspects is [color, depth, stencil].
"stencil-only"
    Only the stencil aspect of a depth-or-stencil format will be accessible to the texture view.
    The set of aspects is [stencil].
"depth-only"
    Only the depth aspect of a depth-or-stencil format will be accessible to the texture view.
    The set of aspects is [depth].
createView(descriptor)
    Creates a GPUTextureView.

    NOTE: By default, createView() will create a view with a dimension that can represent the entire texture. For example, calling createView() without specifying a dimension on a "2d" texture with more than one layer will create a "2d-array" GPUTextureView, even if an arrayLayerCount of 1 is specified.

    For textures created from sources where the layer count is unknown at the time of development, it is recommended that calls to createView() are provided with an explicit dimension to ensure shader compatibility.

    Called on: GPUTexture this.

    Arguments for the GPUTexture.createView(descriptor) method:

    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | descriptor | GPUTextureViewDescriptor | ✘ | ✔ | Description of the GPUTextureView to create. |

    Returns: view, of type GPUTextureView.

    Content timeline steps:

    1. ? Validate texture format required features of descriptor.format with this.[[device]].
    2. Let view be a new GPUTextureView object.
    3. Issue the initialization steps on the Device timeline of this.
    4. Return view.
Device timeline initialization steps:

1. Set descriptor to the result of resolving GPUTextureViewDescriptor defaults for this with descriptor.
2. If any of the following conditions are unsatisfied, generate a validation error, make view invalid, and stop.
    - this is valid.
    - descriptor.aspect must be present in this.format.
    - If descriptor.aspect is "all":
        - descriptor.format must equal either this.format or one of the formats in this.[[viewFormats]].
      Otherwise:
        - descriptor.format must equal the result of resolving GPUTextureAspect(this.format, descriptor.aspect).
    - descriptor.mipLevelCount must be > 0.
    - descriptor.baseMipLevel + descriptor.mipLevelCount must be ≤ this.mipLevelCount.
    - descriptor.arrayLayerCount must be > 0.
    - descriptor.baseArrayLayer + descriptor.arrayLayerCount must be ≤ the array layer count of this.
    - If this.sampleCount > 1, descriptor.dimension must be "2d".
    - If descriptor.dimension is:
      "1d"
        - this.dimension must be "1d".
        - descriptor.arrayLayerCount must be 1.
      "2d"
        - this.dimension must be "2d".
        - descriptor.arrayLayerCount must be 1.
      "2d-array"
        - this.dimension must be "2d".
      "cube"
        - this.dimension must be "2d".
        - descriptor.arrayLayerCount must be 6.
        - this.width must equal this.height.
      "cube-array"
        - this.dimension must be "2d".
        - descriptor.arrayLayerCount must be a multiple of 6.
        - this.width must equal this.height.
      "3d"
        - this.dimension must be "3d".
        - descriptor.arrayLayerCount must be 1.
3. Set view.[[texture]] to this.
4. Set view.[[descriptor]] to descriptor.
5. If this.usage contains RENDER_ATTACHMENT:
    1. Let renderExtent be compute render extent(this.[[size]], descriptor.baseMipLevel).
    2. Set view.[[renderExtent]] to renderExtent.
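The range and dimension checks above can be sketched as a single predicate. This is a non-normative JavaScript illustration over plain objects standing in for GPUTexture and the resolved GPUTextureViewDescriptor; the function name and object shapes are invented for the example, and format/aspect validation is omitted.

```javascript
// Returns true iff the (already-defaulted) view descriptor passes the
// mip-range, layer-range, multisample, and dimension compatibility checks.
function validateTextureView(texture, desc) {
  if (desc.mipLevelCount <= 0) return false;
  if (desc.baseMipLevel + desc.mipLevelCount > texture.mipLevelCount) return false;
  if (desc.arrayLayerCount <= 0) return false;
  // The array layer count of a texture: depthOrArrayLayers for "2d", else 1.
  const layerCount = texture.dimension === "2d" ? texture.depthOrArrayLayers : 1;
  if (desc.baseArrayLayer + desc.arrayLayerCount > layerCount) return false;
  if (texture.sampleCount > 1 && desc.dimension !== "2d") return false;
  switch (desc.dimension) {
    case "1d":
      return texture.dimension === "1d" && desc.arrayLayerCount === 1;
    case "2d":
      return texture.dimension === "2d" && desc.arrayLayerCount === 1;
    case "2d-array":
      return texture.dimension === "2d";
    case "cube":
      return texture.dimension === "2d" && desc.arrayLayerCount === 6 &&
             texture.width === texture.height;
    case "cube-array":
      return texture.dimension === "2d" && desc.arrayLayerCount % 6 === 0 &&
             texture.width === texture.height;
    case "3d":
      return texture.dimension === "3d" && desc.arrayLayerCount === 1;
  }
  return false;
}
```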
When resolving GPUTextureViewDescriptor defaults for GPUTexture texture with a GPUTextureViewDescriptor descriptor, run the following steps:

1. Let resolved be a copy of descriptor.
2. If resolved.format is not provided:
    1. Let format be the result of resolving GPUTextureAspect(texture.format, descriptor.aspect).
    2. If format is null:
        Set resolved.format to texture.format.
       Otherwise:
        Set resolved.format to format.
3. If resolved.mipLevelCount is not provided: set resolved.mipLevelCount to texture.mipLevelCount − resolved.baseMipLevel.
4. If resolved.dimension is not provided and texture.dimension is:
    "1d"
        Set resolved.dimension to "1d".
    "2d"
        If the array layer count of texture is 1:
            Set resolved.dimension to "2d".
        Otherwise:
            Set resolved.dimension to "2d-array".
    "3d"
        Set resolved.dimension to "3d".
5. If resolved.arrayLayerCount is not provided and resolved.dimension is:
    "1d", "2d", or "3d"
        Set resolved.arrayLayerCount to 1.
    "cube"
        Set resolved.arrayLayerCount to 6.
    "2d-array" or "cube-array"
        Set resolved.arrayLayerCount to the array layer count of texture − resolved.baseArrayLayer.
6. Return resolved.
To determine the array layer count of GPUTexture texture, run the following steps:

- If texture.dimension is:
    "1d" or "3d"
        Return 1.
    "2d"
        Return texture.depthOrArrayLayers.
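The defaulting rules above can be sketched in JavaScript. This is a non-normative illustration over plain objects standing in for GPUTexture and GPUTextureViewDescriptor; the function names are invented for the example, and format defaulting (step 2) is omitted.

```javascript
// The "array layer count" algorithm: depthOrArrayLayers for "2d" textures,
// 1 for "1d" and "3d" textures.
function arrayLayerCountOf(texture) {
  return texture.dimension === "2d" ? texture.depthOrArrayLayers : 1;
}

// Resolving GPUTextureViewDescriptor defaults for dimension and counts.
function resolveViewDefaults(texture, desc = {}) {
  const r = { baseMipLevel: 0, baseArrayLayer: 0, ...desc };
  if (r.mipLevelCount === undefined)
    r.mipLevelCount = texture.mipLevelCount - r.baseMipLevel;
  if (r.dimension === undefined) {
    if (texture.dimension === "1d") r.dimension = "1d";
    else if (texture.dimension === "3d") r.dimension = "3d";
    else r.dimension = arrayLayerCountOf(texture) === 1 ? "2d" : "2d-array";
  }
  if (r.arrayLayerCount === undefined) {
    if (["1d", "2d", "3d"].includes(r.dimension)) r.arrayLayerCount = 1;
    else if (r.dimension === "cube") r.arrayLayerCount = 6;
    else r.arrayLayerCount = arrayLayerCountOf(texture) - r.baseArrayLayer;
  }
  return r;
}
```

Note how a layered "2d" texture defaults to a "2d-array" view, matching the NOTE under createView() above.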
6.3. Texture Formats
The name of the format specifies the order of components, bits per component, and data type for the component.

- r, g, b, a = red, green, blue, alpha
- unorm = unsigned normalized
- snorm = signed normalized
- uint = unsigned int
- sint = signed int
- float = floating point

If the format has the -srgb suffix, then sRGB conversions from gamma to linear and vice versa are applied during the reading and writing of color values in the shader. Compressed texture formats are provided by features. Their naming should follow the convention here, with the texture name as a prefix, e.g. etc2-rgba8unorm.
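The naming convention can be illustrated with a tiny parser. This is non-normative; the helper name, return shape, and regular expression are invented for the example, and it only handles plain single-data-type color formats (not packed, depth/stencil, or compressed names).

```javascript
// Splits names like "rg16float" or "rgba8unorm-srgb" into their parts
// following the convention described above. Returns null for names that
// do not follow the plain-color pattern (e.g. "depth24plus").
function parseFormatName(format) {
  const m = /^([a-z]+?)(\d+)(unorm|snorm|uint|sint|float)(-srgb)?$/.exec(format);
  if (!m) return null;
  return {
    components: m[1].split(""),     // e.g. ["r", "g"] — component order
    bitsPerComponent: Number(m[2]), // e.g. 16
    type: m[3],                     // e.g. "float"
    srgb: m[4] === "-srgb",         // whether sRGB conversion applies
  };
}
```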
The texel block is a single addressable element of the textures in pixel-based GPUTextureFormats, and a single compressed block of the textures in block-based compressed GPUTextureFormats.

The texel block width and texel block height specify the dimensions of one texel block.

- For pixel-based GPUTextureFormats, the texel block width and texel block height are always 1.
- For block-based compressed GPUTextureFormats, the texel block width is the number of texels in each row of one texel block, and the texel block height is the number of texel rows in one texel block. See § 26.1 Texture Format Capabilities for an exhaustive list of values for every texture format.

The texel block copy footprint of an aspect of a GPUTextureFormat is the number of bytes one texel block occupies during an image copy, if applicable.

Note: The texel block memory cost of a GPUTextureFormat is the number of bytes needed to store one texel block. It is not fully defined for all formats. This value is informative and non-normative.
enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",
    // Packed 32-bit formats
    "rgb9e5ufloat",
    "rgb10a2uint",
    "rgb10a2unorm",
    "rg11b10ufloat",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth/stencil formats
    "stencil8",
    "depth16unorm",
    "depth24plus",
    "depth24plus-stencil8",
    "depth32float",

    // "depth32float-stencil8" feature
    "depth32float-stencil8",

    // BC compressed formats usable if "texture-compression-bc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "bc1-rgba-unorm",
    "bc1-rgba-unorm-srgb",
    "bc2-rgba-unorm",
    "bc2-rgba-unorm-srgb",
    "bc3-rgba-unorm",
    "bc3-rgba-unorm-srgb",
    "bc4-r-unorm",
    "bc4-r-snorm",
    "bc5-rg-unorm",
    "bc5-rg-snorm",
    "bc6h-rgb-ufloat",
    "bc6h-rgb-float",
    "bc7-rgba-unorm",
    "bc7-rgba-unorm-srgb",

    // ETC2 compressed formats usable if "texture-compression-etc2" is both
    // supported by the device/user agent and enabled in requestDevice.
    "etc2-rgb8unorm",
    "etc2-rgb8unorm-srgb",
    "etc2-rgb8a1unorm",
    "etc2-rgb8a1unorm-srgb",
    "etc2-rgba8unorm",
    "etc2-rgba8unorm-srgb",
    "eac-r11unorm",
    "eac-r11snorm",
    "eac-rg11unorm",
    "eac-rg11snorm",

    // ASTC compressed formats usable if "texture-compression-astc" is both
    // supported by the device/user agent and enabled in requestDevice.
    "astc-4x4-unorm",
    "astc-4x4-unorm-srgb",
    "astc-5x4-unorm",
    "astc-5x4-unorm-srgb",
    "astc-5x5-unorm",
    "astc-5x5-unorm-srgb",
    "astc-6x5-unorm",
    "astc-6x5-unorm-srgb",
    "astc-6x6-unorm",
    "astc-6x6-unorm-srgb",
    "astc-8x5-unorm",
    "astc-8x5-unorm-srgb",
    "astc-8x6-unorm",
    "astc-8x6-unorm-srgb",
    "astc-8x8-unorm",
    "astc-8x8-unorm-srgb",
    "astc-10x5-unorm",
    "astc-10x5-unorm-srgb",
    "astc-10x6-unorm",
    "astc-10x6-unorm-srgb",
    "astc-10x8-unorm",
    "astc-10x8-unorm-srgb",
    "astc-10x10-unorm",
    "astc-10x10-unorm-srgb",
    "astc-12x10-unorm",
    "astc-12x10-unorm-srgb",
    "astc-12x12-unorm",
    "astc-12x12-unorm-srgb",
};
The depth component of the "depth24plus" and "depth24plus-stencil8" formats may be implemented as either a 24-bit depth value or a "depth32float" value.

The stencil8 format may be implemented as either a real "stencil8", or "depth24stencil8", where the depth aspect is hidden and inaccessible.
NOTE: While the precision of depth32float channels is strictly higher than the precision of 24-bit depth channels for all values in the representable range (0.0 to 1.0), note that the set of representable values is not an exact superset.

- For 24-bit depth, 1 ULP has a constant value of 1 / (2^24 − 1).
- For depth32float, 1 ULP has a variable value no greater than 1 / 2^24.
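The two ULP bounds in the note can be checked numerically. This is a non-normative sketch; the constant names are invented for the example.

```javascript
// Constant ULP of a 24-bit unsigned-normalized depth value.
const ulp24bitDepth = 1 / (2 ** 24 - 1);
// Upper bound on the ULP of a float32 value in [0.0, 1.0]
// (the ULP is largest just below 1.0, where it is 2^-24).
const maxUlpDepth32Float = 1 / 2 ** 24;
```

Since the worst-case depth32float step is still smaller than the constant 24-bit step, depth32float has strictly higher precision in-range, even though the representable values do not line up exactly.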
A format is renderable if it is either a color renderable format, or a depth-or-stencil format. If a format is listed in § 26.1.1 Plain color formats with RENDER_ATTACHMENT capability, it is a color renderable format. Any other format is not a color renderable format. All depth-or-stencil formats are renderable.

A renderable format is also blendable if it can be used with render pipeline blending. See § 26.1 Texture Format Capabilities.

A format is filterable if it supports the GPUTextureSampleType "float" (not just "unfilterable-float"); that is, it can be used with "filtering" GPUSamplers. See § 26.1 Texture Format Capabilities.
resolving GPUTextureAspect(format, aspect)

Arguments:

- GPUTextureFormat format
- GPUTextureAspect aspect

Returns: GPUTextureFormat or null

1. If aspect is:
    "all"
        Return format.
    "depth-only"
    "stencil-only"
        If format is a depth-stencil-format: Return the aspect-specific format of format according to § 26.1.2 Depth-stencil formats, or null if the aspect is not present in format.
2. Return null.
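The algorithm above can be sketched with a lookup table. This is a non-normative illustration; the table entries are the aspect-specific formats given in § 26.1.2 Depth-stencil formats, and the helper names are invented for the example.

```javascript
// Aspect-specific formats for the depth-or-stencil formats,
// per § 26.1.2 Depth-stencil formats. null means "aspect not present".
const aspectFormats = {
  "stencil8":              { depth: null,           stencil: "stencil8" },
  "depth16unorm":          { depth: "depth16unorm", stencil: null },
  "depth24plus":           { depth: "depth24plus",  stencil: null },
  "depth24plus-stencil8":  { depth: "depth24plus",  stencil: "stencil8" },
  "depth32float":          { depth: "depth32float", stencil: null },
  "depth32float-stencil8": { depth: "depth32float", stencil: "stencil8" },
};

// resolving GPUTextureAspect(format, aspect)
function resolveAspect(format, aspect) {
  if (aspect === "all") return format;
  const entry = aspectFormats[format];
  if (!entry) return null; // not a depth-or-stencil format
  return aspect === "depth-only" ? entry.depth : entry.stencil;
}
```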
Use of some texture formats requires a feature to be enabled on the GPUDevice. Because new formats can be added to the specification, those enum values may not be known by the implementation. In order to normalize behavior across implementations, attempting to use a format that requires a feature will throw an exception if the associated feature is not enabled on the device. This makes the behavior the same as when the format is unknown to the implementation.

See § 26.1 Texture Format Capabilities for information about which GPUTextureFormats require features.

Validate texture format required features of a GPUTextureFormat format with logical device device by running the following steps:

1. If format requires a feature and device.[[features]] does not contain that feature, throw a TypeError.
6.4. GPUExternalTexture
A GPUExternalTexture is a sampleable 2D texture wrapping an external video object. The contents of a GPUExternalTexture object are a snapshot and may not change, either from inside WebGPU (it is only sampleable) or from outside WebGPU (e.g. due to video frame advancement).

They are bound into bind group layouts using the externalTexture bind group layout entry member. External textures use several binding slots: see Exceeds the binding slot limits.
NOTE: External textures can be implemented without creating a copy of the imported source, but this depends on implementation-defined factors. Ownership of the underlying representation may either be exclusive or shared with other owners (such as a video decoder), but this is not visible to the application.

The underlying representation of an external texture is unobservable (except for sampling behavior) but typically may include:

- Up to three 2D planes of data (e.g. RGBA, Y+UV, Y+U+V).
- Metadata for converting coordinates before reading from those planes (crop and rotation).
- Metadata for converting values into the specified output color space (matrices, gammas, 3D LUT).

The configuration used may not be stable across time, systems, user agents, media sources, or frames within a single video source. In order to account for many possible representations, the binding conservatively uses the following, for each external texture:

- three sampled texture bindings (for up to 3 planes),
- one sampled texture binding for a 3D LUT,
- one sampler binding to sample the 3D LUT, and
- one uniform buffer binding for metadata.
[Exposed=(Window, Worker), SecureContext]
interface GPUExternalTexture {
};
GPUExternalTexture includes GPUObjectBase;
GPUExternalTexture has the following internal slots:

[[expired]], of type boolean
    Indicates whether the object has expired (can no longer be used). Initially set to false.
    Note: Unlike similar [[destroyed]] slots, this can change from true back to false.
[[descriptor]], of type GPUExternalTextureDescriptor
    The descriptor with which the texture was created.
6.4.1. Importing External Textures
An external texture is created from an external video object using importExternalTexture().

An external texture created from an HTMLVideoElement expires (is destroyed) automatically in a task after it is imported, instead of manually or upon garbage collection like other resources. When an external texture expires, its [[expired]] slot changes to true.

An external texture created from a VideoFrame expires (is destroyed) when, and only when, the source VideoFrame is closed, either explicitly by close(), or by other means.

Note: As noted in decode(), authors should call close() on output VideoFrames to avoid decoder stalls. If an imported VideoFrame is dropped without being closed, the imported GPUExternalTexture object will keep it alive until it is also dropped. The VideoFrame cannot be garbage collected until both objects are dropped. Garbage collection is unpredictable, so this may still stall the video decoder.

Once the GPUExternalTexture expires, importExternalTexture() must be called again. However, the user agent may un-expire and return the same GPUExternalTexture again, instead of creating a new one. This will commonly happen unless the execution of the application is scheduled to match the video’s frame rate (e.g. using requestVideoFrameCallback()). If the same object is returned again, it will compare equal, and GPUBindGroups, GPURenderBundles, etc. referencing the previous object can still be used.
dictionary GPUExternalTextureDescriptor : GPUObjectDescriptorBase {
    required (HTMLVideoElement or VideoFrame) source;
    PredefinedColorSpace colorSpace = "srgb";
};
importExternalTexture(descriptor)
    Creates a GPUExternalTexture wrapping the provided image source.

    Called on: GPUDevice this.

    Arguments for the GPUDevice.importExternalTexture(descriptor) method:

    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | descriptor | GPUExternalTextureDescriptor | ✘ | ✘ | Provides the external image source object (and any creation options). |

    Returns: GPUExternalTexture

    Content timeline steps:

    1. Let source be descriptor.source.
    2. If the current image contents of source are the same as the most recent importExternalTexture() call with the same descriptor (ignoring label), and the user agent chooses to reuse it:
        1. Let previousResult be the GPUExternalTexture returned previously.
        2. Set previousResult.[[expired]] to false, renewing ownership of the underlying resource.
        3. Let result be previousResult.

        Note: This allows the application to detect duplicate imports and avoid re-creating dependent objects (such as GPUBindGroups). Implementations still need to be able to handle a single frame being wrapped by multiple GPUExternalTextures, since import metadata like colorSpace can change even for the same frame.

       Otherwise:
        1. If source is not origin-clean, throw a SecurityError and stop.
        2. Let usability be ? check the usability of the image argument(source).
        3. If usability is not good:
            1. Generate a validation error.
            2. Return an invalid GPUExternalTexture.
        4. Let data be the result of converting the current image contents of source into the color space descriptor.colorSpace with unpremultiplied alpha. This may result in values outside of the range [0, 1]. If clamping is desired, it may be performed after sampling.

            Note: This is described like a copy, but may be implemented as a reference to read-only underlying data plus appropriate metadata to perform conversion later.

        5. Let result be a new GPUExternalTexture object wrapping data.
    3. If source is an HTMLVideoElement, queue an automatic expiry task with device this and the following steps:
        1. Set result.[[expired]] to true, releasing ownership of the underlying resource.

        Note: An HTMLVideoElement should be imported in the same task that samples the texture (which should generally be scheduled using requestVideoFrameCallback() or requestAnimationFrame() depending on the application). Otherwise, a texture could get destroyed by these steps before the application is finished using it.
    4. If source is a VideoFrame, then when source is closed, run the following steps:
        1. Set result.[[expired]] to true.
    5. Set result.label to descriptor.label.
    6. Return result.
Rendering using a video element external texture at the page animation frame rate:

const videoElement = document.createElement('video');
// ... set up videoElement, wait for it to be ready...

function frame() {
    requestAnimationFrame(frame);

    // Always re-import the video on every animation frame, because the
    // import is likely to have expired.
    // The browser may cache and reuse a past frame, and if it does it
    // may return the same GPUExternalTexture object again.
    // In this case, old bind groups are still valid.
    const externalTexture = gpuDevice.importExternalTexture({source: videoElement});

    // ... render using externalTexture...
}
requestAnimationFrame(frame);
Rendering using a video element external texture at the video’s frame rate, if requestVideoFrameCallback is available:

const videoElement = document.createElement('video');
// ... set up videoElement...

function frame() {
    videoElement.requestVideoFrameCallback(frame);

    // Always re-import, because we know the video frame has advanced
    const externalTexture = gpuDevice.importExternalTexture({source: videoElement});

    // ... render using externalTexture...
}
videoElement.requestVideoFrameCallback(frame);
6.4.2. Sampling External Textures
External textures are represented in WGSL with texture_external and may be read using textureLoad and textureSampleBaseClampToEdge.
The sampler provided to textureSampleBaseClampToEdge is used to sample the underlying textures. The result is in the color space set by colorSpace. It is implementation-dependent whether, for any given external texture, the sampler (and filtering) is applied before or after conversion from underlying values into the specified color space.

Note: If the internal representation is an RGBA plane, sampling behaves as on a regular 2D texture. If there are several underlying planes (e.g. Y+UV), the sampler is used to sample each underlying texture separately, prior to conversion from YUV to the specified color space.
7. Samplers
7.1. GPUSampler
A GPUSampler encodes transformations and filtering information that can be used in a shader to interpret texture resource data.

GPUSamplers are created via createSampler().
[Exposed=(Window, Worker), SecureContext]
interface GPUSampler {
};
GPUSampler includes GPUObjectBase;
GPUSampler has the following internal slots:

[[descriptor]], of type GPUSamplerDescriptor, readonly
    The GPUSamplerDescriptor with which the GPUSampler was created.
[[isComparison]], of type boolean
    Whether the GPUSampler is used as a comparison sampler.
[[isFiltering]], of type boolean
    Whether the GPUSampler weights multiple samples of a texture.
7.1.1. GPUSamplerDescriptor
A GPUSamplerDescriptor specifies the options to use to create a GPUSampler.

dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUMipmapFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 32;
    GPUCompareFunction compare;
    [Clamp] unsigned short maxAnisotropy = 1;
};
addressModeU, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeV, of type GPUAddressMode, defaulting to "clamp-to-edge"
addressModeW, of type GPUAddressMode, defaulting to "clamp-to-edge"
    Specifies the address modes for the texture width, height, and depth coordinates, respectively.
magFilter, of type GPUFilterMode, defaulting to "nearest"
    Specifies the sampling behavior when the sample footprint is smaller than or equal to one texel.
minFilter, of type GPUFilterMode, defaulting to "nearest"
    Specifies the sampling behavior when the sample footprint is larger than one texel.
mipmapFilter, of type GPUMipmapFilterMode, defaulting to "nearest"
    Specifies behavior for sampling between mipmap levels.
lodMinClamp, of type float, defaulting to 0
lodMaxClamp, of type float, defaulting to 32
    Specifies the minimum and maximum levels of detail, respectively, used internally when sampling a texture.
compare, of type GPUCompareFunction
    When provided the sampler will be a comparison sampler with the specified GPUCompareFunction.
    Note: Comparison samplers may use filtering, but the sampling results will be implementation-dependent and may differ from the normal filtering rules.
maxAnisotropy, of type unsigned short, defaulting to 1
    Specifies the maximum anisotropy value clamp used by the sampler.
    Note: Most implementations support maxAnisotropy values in range between 1 and 16, inclusive. The used value of maxAnisotropy will be clamped to the maximum value that the platform supports.
Issue: Explain how LOD is calculated and if there are differences here between platforms.

Issue: Explain what anisotropic sampling is.

GPUAddressMode describes the behavior of the sampler if the sample footprint extends beyond the bounds of the sampled texture.

Issue: Describe a "sample footprint" in greater detail.
enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat",
};

"clamp-to-edge"
    Texture coordinates are clamped between 0.0 and 1.0, inclusive.
"repeat"
    Texture coordinates wrap to the other side of the texture.
"mirror-repeat"
    Texture coordinates wrap to the other side of the texture, but the texture is flipped when the integer part of the coordinate is odd.
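The three address modes can be sketched as pure coordinate transforms. This is a non-normative illustration of the behavior described above; the function name is invented for the example.

```javascript
// Maps a normalized texture coordinate (possibly outside [0, 1])
// back into [0, 1] according to the address mode.
function applyAddressMode(mode, coord) {
  switch (mode) {
    case "clamp-to-edge":
      return Math.min(Math.max(coord, 0), 1);
    case "repeat":
      // Keep only the fractional part (works for negative coords too).
      return coord - Math.floor(coord);
    case "mirror-repeat": {
      const period = Math.floor(coord);
      const frac = coord - period;
      // Flip the coordinate when the integer part is odd.
      return Math.abs(period) % 2 === 1 ? 1 - frac : frac;
    }
  }
}
```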
GPUFilterMode and GPUMipmapFilterMode describe the behavior of the sampler if the sample footprint does not exactly match one texel.

enum GPUFilterMode {
    "nearest",
    "linear",
};

enum GPUMipmapFilterMode {
    "nearest",
    "linear",
};

"nearest"
    Return the value of the texel nearest to the texture coordinates.
"linear"
    Select two texels in each dimension and return a linear interpolation between their values.
GPUCompareFunction specifies the behavior of a comparison sampler. If a comparison sampler is used in a shader, an input value is compared to the sampled texture value, and the result of this comparison test (0.0f for fail, or 1.0f for pass) is used in the filtering operation.

Issue: Describe how filtering interacts with comparison sampling.

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always",
};
"never"
    Comparison tests never pass.
"less"
    A provided value passes the comparison test if it is less than the sampled value.
"equal"
    A provided value passes the comparison test if it is equal to the sampled value.
"less-equal"
    A provided value passes the comparison test if it is less than or equal to the sampled value.
"greater"
    A provided value passes the comparison test if it is greater than the sampled value.
"not-equal"
    A provided value passes the comparison test if it is not equal to the sampled value.
"greater-equal"
    A provided value passes the comparison test if it is greater than or equal to the sampled value.
"always"
    Comparison tests always pass.
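The eight comparison functions map directly to predicates over the provided and sampled values. This is a non-normative sketch; the table name is invented for the example.

```javascript
// Each GPUCompareFunction as a predicate (provided, sampled) -> boolean.
// The boolean result feeds the filtering operation as 1.0 (pass) or 0.0 (fail).
const compareFunctions = {
  "never":         () => false,
  "less":          (v, s) => v < s,
  "equal":         (v, s) => v === s,
  "less-equal":    (v, s) => v <= s,
  "greater":       (v, s) => v > s,
  "not-equal":     (v, s) => v !== s,
  "greater-equal": (v, s) => v >= s,
  "always":        () => true,
};
```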
7.1.2. Sampler Creation
createSampler(descriptor)
    Creates a GPUSampler.

    Called on: GPUDevice this.

    Arguments for the GPUDevice.createSampler(descriptor) method:

    | Parameter | Type | Nullable | Optional | Description |
    |---|---|---|---|---|
    | descriptor | GPUSamplerDescriptor | ✘ | ✔ | Description of the GPUSampler to create. |

    Returns: GPUSampler

    Content timeline steps:

    1. Let s be a new GPUSampler object.
    2. Issue the initialization steps on the Device timeline of this.
    3. Return s.

    Device timeline initialization steps:

    1. If any of the following conditions are unsatisfied, generate a validation error, make s invalid, and stop.
        - this is valid.
        - descriptor.lodMinClamp ≥ 0.
        - descriptor.lodMaxClamp ≥ descriptor.lodMinClamp.
        - descriptor.maxAnisotropy ≥ 1.
        - If descriptor.maxAnisotropy > 1: descriptor.magFilter, descriptor.minFilter, and descriptor.mipmapFilter must be "linear".

        Note: Most implementations support maxAnisotropy values in range between 1 and 16, inclusive. The provided maxAnisotropy value will be clamped to the maximum value that the platform supports.
    2. Set s.[[descriptor]] to descriptor.
    3. Set s.[[isComparison]] to false if the compare attribute of s.[[descriptor]] is null or undefined. Otherwise, set it to true.
    4. Set s.[[isFiltering]] to false if none of minFilter, magFilter, or mipmapFilter has the value of "linear". Otherwise, set it to true.
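The validation rules and the derived [[isComparison]]/[[isFiltering]] slots can be sketched over a plain descriptor object. This is a non-normative illustration; the function name and return shape are invented for the example.

```javascript
// Applies GPUSamplerDescriptor defaults, checks the validation rules above,
// and computes the derived comparison/filtering flags.
function checkSamplerDescriptor(desc = {}) {
  const d = {
    magFilter: "nearest", minFilter: "nearest", mipmapFilter: "nearest",
    lodMinClamp: 0, lodMaxClamp: 32, maxAnisotropy: 1,
    ...desc,
  };
  const filters = [d.magFilter, d.minFilter, d.mipmapFilter];
  const valid =
    d.lodMinClamp >= 0 &&
    d.lodMaxClamp >= d.lodMinClamp &&
    d.maxAnisotropy >= 1 &&
    // Anisotropy above 1 requires all three filters to be "linear".
    (d.maxAnisotropy <= 1 || filters.every(f => f === "linear"));
  return {
    valid,
    isComparison: d.compare !== undefined, // [[isComparison]]
    isFiltering: filters.includes("linear"), // [[isFiltering]]
  };
}
```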
Creating a GPUSampler that does trilinear filtering and repeats texture coordinates:

const sampler = gpuDevice.createSampler({
    addressModeU: 'repeat',
    addressModeV: 'repeat',
    magFilter: 'linear',
    minFilter: 'linear',
    mipmapFilter: 'linear',
});
8. Resource Binding
8.1. GPUBindGroupLayout
A GPUBindGroupLayout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.

[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroupLayout {
};
GPUBindGroupLayout includes GPUObjectBase;
GPUBindGroupLayout has the following internal slots:

[[descriptor]], of type GPUBindGroupLayoutDescriptor
8.1.1. Bind Group Layout Creation
A GPUBindGroupLayout is created via GPUDevice.createBindGroupLayout().

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};
A GPUBindGroupLayoutEntry describes a single shader resource binding to be included in a GPUBindGroupLayout.

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    GPUBufferBindingLayout buffer;
    GPUSamplerBindingLayout sampler;
    GPUTextureBindingLayout texture;
    GPUStorageTextureBindingLayout storageTexture;
    GPUExternalTextureBindingLayout externalTexture;
};

GPUBindGroupLayoutEntry dictionaries have the following members:
binding, of type GPUIndex32
    A unique identifier for a resource binding within the GPUBindGroupLayout, corresponding to a GPUBindGroupEntry.binding and a @binding attribute in the GPUShaderModule.
visibility, of type GPUShaderStageFlags
    A bitset of the members of GPUShaderStage. Each set bit indicates that a GPUBindGroupLayoutEntry's resource will be accessible from the associated shader stage.
buffer, of type GPUBufferBindingLayout
    When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUBufferBinding.
sampler, of type GPUSamplerBindingLayout
    When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUSampler.
texture, of type GPUTextureBindingLayout
    When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUTextureView.
storageTexture, of type GPUStorageTextureBindingLayout
    When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUTextureView.
externalTexture, of type GPUExternalTextureBindingLayout
    When provided, indicates the binding resource type for this GPUBindGroupLayoutEntry is GPUExternalTexture.
typedef [EnforceRange] unsigned long GPUShaderStageFlags;

[Exposed=(Window, Worker), SecureContext]
namespace GPUShaderStage {
    const GPUFlagsConstant VERTEX = 0x1;
    const GPUFlagsConstant FRAGMENT = 0x2;
    const GPUFlagsConstant COMPUTE = 0x4;
};
GPUShaderStage contains the following flags, which describe which shader stages a corresponding GPUBindGroupEntry for this GPUBindGroupLayoutEntry will be visible to:

VERTEX
    The bind group entry will be accessible to vertex shaders.
FRAGMENT
    The bind group entry will be accessible to fragment shaders.
COMPUTE
    The bind group entry will be accessible to compute shaders.
The binding member of a GPUBindGroupLayoutEntry is determined by which member of the GPUBindGroupLayoutEntry is defined: buffer, sampler, texture, storageTexture, or externalTexture. Only one may be defined for any given GPUBindGroupLayoutEntry. Each member has an associated GPUBindingResource type and each binding type has an associated internal usage, given by this table:
| Binding member | Resource type | Binding type | Binding usage |
|---|---|---|---|
| buffer | GPUBufferBinding | "uniform" | constant |
| | | "storage" | storage |
| | | "read-only-storage" | storage-read |
| sampler | GPUSampler | "filtering" | constant |
| | | "non-filtering" | constant |
| | | "comparison" | constant |
| texture | GPUTextureView | "float" | constant |
| | | "unfilterable-float" | constant |
| | | "depth" | constant |
| | | "sint" | constant |
| | | "uint" | constant |
| storageTexture | GPUTextureView | "write-only" | storage |
| | | "read-write" | storage |
| | | "read-only" | storage-read |
| externalTexture | GPUExternalTexture | | constant |
The list of GPUBindGroupLayoutEntry
values entries exceeds the binding slot limits of supported limits limits if the number of slots used toward a limit exceeds the supported value in limits. Each entry may use multiple slots toward multiple limits.
-
For each entry in entries, if:
- entry.
buffer
?.type
is"uniform"
and entry.buffer
?.hasDynamicOffset
istrue
-
Consider 1
maxDynamicUniformBuffersPerPipelineLayout
slot to be used. - entry.
buffer
?.type
is"storage"
and entry.buffer
?.hasDynamicOffset
istrue
-
Consider 1
maxDynamicStorageBuffersPerPipelineLayout
slot to be used.
- entry.
-
For each shader stage stage in«
VERTEX
,FRAGMENT
,COMPUTE
»:-
For each entry in entries for which entry.
visibility
contains stage, if:- entry.
buffer
?.type
is"uniform"
-
Consider 1
maxUniformBuffersPerShaderStage
slot to be used. - entry.
buffer
?.type
is"storage"
or"read-only-storage"
-
Consider 1
maxStorageBuffersPerShaderStage
slot to be used. - entry.
sampler
is provided -
Consider 1
maxSamplersPerShaderStage
slot to be used. - entry.
texture
is provided -
Consider 1
maxSampledTexturesPerShaderStage
slot to be used. - entry.
storageTexture
is provided -
Consider 1
maxStorageTexturesPerShaderStage
slot to be used. - entry.
externalTexture
is provided -
Consider 4
maxSampledTexturesPerShaderStage
slots, 1 maxSamplersPerShaderStage
slot, and 1 maxUniformBuffersPerShaderStage
slot to be used.
-
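The per-stage counting above can be sketched in plain JavaScript (a non-normative helper, not part of the API; the GPUShaderStage flag values follow the spec):

```javascript
// Non-normative sketch of the per-stage slot counting above.
// GPUShaderStage flag values per the WebGPU spec.
const VERTEX = 0x1, FRAGMENT = 0x2, COMPUTE = 0x4;

// Counts maxSampledTexturesPerShaderStage slots used by `entries` for `stage`:
// each texture entry uses 1 slot, each externalTexture entry uses 4.
function sampledTextureSlotsUsed(entries, stage) {
  let slots = 0;
  for (const entry of entries) {
    if (!(entry.visibility & stage)) continue;
    if (entry.texture) slots += 1;
    if (entry.externalTexture) slots += 4;
  }
  return slots;
}
```

The same shape of loop applies to the other per-stage limits; only the member checked and the slot weight change.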
enum GPUBufferBindingType {
    "uniform",
    "storage",
    "read-only-storage",
};

dictionary GPUBufferBindingLayout {
    GPUBufferBindingType type = "uniform";
    boolean hasDynamicOffset = false;
    GPUSize64 minBindingSize = 0;
};
GPUBufferBindingLayout
dictionaries have the following members:
type
, of type GPUBufferBindingType, defaulting to"uniform"
-
Indicates the type required for buffers bound to this binding.
hasDynamicOffset
, of type boolean, defaulting tofalse
-
Indicates whether this binding requires a dynamic offset.
minBindingSize
, of type GPUSize64, defaulting to0
-
Indicates the minimum
size
of a buffer binding used with this bind point. Bindings are always validated against this size in
createBindGroup()
. If this is not
0
, pipeline creation additionally validates that this value ≥ the minimum buffer binding size of the variable. If this is
0
, it is ignored by pipeline creation, and instead draw/dispatch commands validate that each binding in the GPUBindGroup
satisfies the minimum buffer binding size of the variable.

Note: Similar execution-time validation is theoretically possible for other binding-related fields specified for early validation, like
sampleType
and format
, which currently can only be validated in pipeline creation. However, such execution-time validation could be costly or unnecessarily complex, so it is available only for minBindingSize
, which is expected to have the most ergonomic impact.
enum GPUSamplerBindingType {
    "filtering",
    "non-filtering",
    "comparison",
};

dictionary GPUSamplerBindingLayout {
    GPUSamplerBindingType type = "filtering";
};
GPUSamplerBindingLayout
dictionaries have the following members:
type
, of type GPUSamplerBindingType, defaulting to"filtering"
-
Indicates the required type of a sampler bound to this binding.
enum GPUTextureSampleType {
    "float",
    "unfilterable-float",
    "depth",
    "sint",
    "uint",
};

dictionary GPUTextureBindingLayout {
    GPUTextureSampleType sampleType = "float";
    GPUTextureViewDimension viewDimension = "2d";
    boolean multisampled = false;
};
GPUTextureBindingLayout
dictionaries have the following members:
sampleType
, of type GPUTextureSampleType, defaulting to"float"
-
Indicates the type required for texture views bound to this binding.
viewDimension
, of type GPUTextureViewDimension, defaulting to"2d"
-
Indicates the required
dimension
for texture views bound to this binding.
multisampled
, of type boolean, defaulting tofalse
-
Indicates whether or not texture views bound to this binding must be multisampled.
enum GPUStorageTextureAccess {
    "write-only",
    "read-only",
    "read-write",
};

dictionary GPUStorageTextureBindingLayout {
    GPUStorageTextureAccess access = "write-only";
    required GPUTextureFormat format;
    GPUTextureViewDimension viewDimension = "2d";
};
GPUStorageTextureBindingLayout
dictionaries have the following members:
access
, of type GPUStorageTextureAccess, defaulting to"write-only"
-
The access mode for this binding, indicating readability and writability.
format
, of type GPUTextureFormat-
The required
format
of texture views bound to this binding. viewDimension
, of type GPUTextureViewDimension, defaulting to"2d"
-
Indicates the required
dimension
for texture views bound to this binding.
dictionary GPUExternalTextureBindingLayout {
};
A GPUBindGroupLayout
object has the following internal slots:
[[entryMap]]
, of type ordered map<GPUSize32
,GPUBindGroupLayoutEntry
>-
The map of binding indices pointing to the
GPUBindGroupLayoutEntry
s, which this GPUBindGroupLayout
describes. [[dynamicOffsetCount]]
, of typeGPUSize32
-
The number of buffer bindings with dynamic offsets in this
GPUBindGroupLayout
. [[exclusivePipeline]]
, of typeGPUPipelineBase
?, initiallynull
-
The pipeline that created this
GPUBindGroupLayout
, if it was created as part of a default pipeline layout. If not null
, GPUBindGroups created with this GPUBindGroupLayout
can only be used with the specified GPUPipelineBase
.
createBindGroupLayout(descriptor)
-
Creates a
GPUBindGroupLayout
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createBindGroupLayout(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUBindGroupLayoutDescriptor
✘ ✘ Description of the GPUBindGroupLayout
to create.Returns:
GPUBindGroupLayout
Content timeline steps:
-
For each
GPUBindGroupLayoutEntry
entry in descriptor.entries
: -
Let layout be a new
GPUBindGroupLayout
object. -
Issue the initialization steps on the Device timeline of this.
-
Return layout.
Device timeline initialization steps:
-
If any of the following conditions are unsatisfied generate a validation error, make layout invalid, and stop.
-
this is valid.
-
Let limits be this.
[[device]]
.[[limits]]
. -
The
binding
of each entry in descriptor is unique. -
The
binding
of each entry in descriptor must be < limits.maxBindingsPerBindGroup
. -
descriptor.
entries
must not exceed the binding slot limits of limits. -
For each
GPUBindGroupLayoutEntry
entry in descriptor.entries
:-
Exactly one of entry.
buffer
, entry.sampler
, entry.texture
, and entry.storageTexture
is provided. -
entry.
visibility
contains only bits defined inGPUShaderStage
. -
If entry.
visibility
includesVERTEX
:-
entry.
buffer
?.type
must not be"storage"
. Note that "read-only-storage"
is allowed. -
entry.
storageTexture
?.access
must be"read-only"
.
-
-
If entry.
texture
?.multisampled
istrue
:-
entry.
texture
.viewDimension
is"2d"
. -
entry.
texture
.sampleType
is not"float"
.
-
-
If entry.
storageTexture
is provided:-
entry.
storageTexture
.viewDimension
is not"cube"
or"cube-array"
. -
entry.
storageTexture
.format
must be a format which can support storage usage for the given entry.storageTexture
.access
according to the § 26.1.1 Plain color formats table.
-
-
-
-
Set layout.
[[descriptor]]
to descriptor. -
Set layout.
[[dynamicOffsetCount]]
to the number of entries in descriptor where buffer
is provided andbuffer
.hasDynamicOffset
istrue
. -
For each
GPUBindGroupLayoutEntry
entry in descriptor.entries
:-
Insert entry into layout.
[[entryMap]]
with the key of entry.binding
.
-
-
8.1.2. Compatibility
Two GPUBindGroupLayout
objects a and b are considered group-equivalent if and only if all of the following conditions are satisfied:
-
a.
[[exclusivePipeline]]
== b.[[exclusivePipeline]]
. -
for any binding number binding, one of the following conditions is satisfied:
-
it’s missing from both a.
[[entryMap]]
and b.[[entryMap]]
. -
a.
[[entryMap]]
[binding] == b.[[entryMap]]
[binding]
-
If bind group layouts are group-equivalent, they can be used interchangeably in all contexts.
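The equivalence check above can be sketched as follows (non-normative; a layout is modeled here as `{ exclusivePipeline, entryMap: Map }`, and JSON comparison stands in for the spec's structural equality of entries):

```javascript
// Non-normative sketch: group-equivalence of two bind group layouts,
// modeled as { exclusivePipeline, entryMap: Map<number, entry> }.
function groupEquivalent(a, b) {
  if (a.exclusivePipeline !== b.exclusivePipeline) return false;
  const bindings = new Set([...a.entryMap.keys(), ...b.entryMap.keys()]);
  for (const binding of bindings) {
    // Each binding number must be present with an equal entry in both maps,
    // or missing from both. JSON comparison is a stand-in for the spec's
    // structural equality of GPUBindGroupLayoutEntry values.
    if (JSON.stringify(a.entryMap.get(binding)) !==
        JSON.stringify(b.entryMap.get(binding))) return false;
  }
  return true;
}
```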
8.2. GPUBindGroup
A GPUBindGroup
defines a set of resources to be bound together in a group and how the resources are used in shader stages.
[Exposed=(Window, Worker), SecureContext]
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;
A GPUBindGroup
object has the following internal slots:
[[layout]]
, of typeGPUBindGroupLayout
, readonly-
The
GPUBindGroupLayout
associated with thisGPUBindGroup
. [[entries]]
, of type sequence<GPUBindGroupEntry
>, readonly-
The set of
GPUBindGroupEntry
s thisGPUBindGroup
describes. [[usedResources]]
, of type ordered map<subresource, list<internal usage>>, readonly-
The set of buffer and texture subresources used by this bind group, associated with lists of the internal usage flags.
The bound buffer ranges of a GPUBindGroup
bindGroup, given list<GPUBufferDynamicOffset> dynamicOffsets, are computed as follows:
-
Let result be a new set<(
GPUBindGroupLayoutEntry
,GPUBufferBinding
)>. -
Let dynamicOffsetIndex be 0.
-
For each
GPUBindGroupEntry
bindGroupEntry in bindGroup.[[entries]]
,sorted by bindGroupEntry.binding
:-
Let bindGroupLayoutEntry be bindGroup.
[[layout]]
.[[entryMap]]
[bindGroupEntry.binding
]. -
If bindGroupLayoutEntry.
buffer
is not provided, continue. -
Let bound be a copy of bindGroupEntry.
resource
. -
Assert bound is a
GPUBufferBinding
. -
If bindGroupLayoutEntry.
buffer
.hasDynamicOffset
:-
Increment bound.
offset
by dynamicOffsets[dynamicOffsetIndex]. -
Increment dynamicOffsetIndex by 1.
-
-
Append (bindGroupLayoutEntry, bound) to result.
-
-
Return result.
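The steps above can be sketched as a plain-JavaScript helper (non-normative; bind group state is modeled as plain objects and the resource shape `{ buffer, offset, size? }` is assumed):

```javascript
// Non-normative sketch of "bound buffer ranges": applies dynamic offsets to
// the buffer bindings of a bind group, in increasing binding order.
function boundBufferRanges(bindGroup, dynamicOffsets) {
  const result = [];
  let dynamicOffsetIndex = 0;
  const entries = [...bindGroup.entries].sort((x, y) => x.binding - y.binding);
  for (const entry of entries) {
    const layoutEntry = bindGroup.layout.entryMap.get(entry.binding);
    if (!layoutEntry.buffer) continue;       // only buffer bindings participate
    const bound = { ...entry.resource };     // copy; never mutate the entry
    if (layoutEntry.buffer.hasDynamicOffset) {
      bound.offset = (bound.offset ?? 0) + dynamicOffsets[dynamicOffsetIndex++];
    }
    result.push([layoutEntry, bound]);
  }
  return result;
}
```

Note that dynamic offsets are consumed in binding order, which is why the entries are sorted before the loop.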
8.2.1. Bind Group Creation
A GPUBindGroup
is created via GPUDevice.createBindGroup()
.
dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};
GPUBindGroupDescriptor
dictionaries have the following members:
layout
, of type GPUBindGroupLayout-
The
GPUBindGroupLayout
the entries of this bind group will conform to. entries
, of type sequence<GPUBindGroupEntry>-
A list of entries describing the resources to expose to the shader for each binding described by the
layout
.
typedef (GPUSampler or GPUTextureView or GPUBufferBinding or GPUExternalTexture) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};
A GPUBindGroupEntry
describes a single resource to be bound in a GPUBindGroup
, and has the following members:
binding
, of type GPUIndex32-
A unique identifier for a resource binding within the
GPUBindGroup
, corresponding to aGPUBindGroupLayoutEntry.binding
and a @binding attribute in theGPUShaderModule
. resource
, of type GPUBindingResource-
The resource to bind, which may be a
GPUSampler
,GPUTextureView
,GPUExternalTexture
, orGPUBufferBinding
.
dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};
A GPUBufferBinding
describes a buffer and optional range to bind as a resource, and has the following members:
buffer
, of type GPUBuffer-
The
GPUBuffer
to bind. offset
, of type GPUSize64, defaulting to0
-
The offset, in bytes, from the beginning of
buffer
to the beginning of the range exposed to the shader by the buffer binding. size
, of type GPUSize64-
The size, in bytes, of the buffer binding.If not provided, specifies the range starting at
offset
and ending at the end of buffer
.
createBindGroup(descriptor)
-
Creates a
GPUBindGroup
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createBindGroup(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUBindGroupDescriptor
✘ ✘ Description of the GPUBindGroup
to create.Returns:
GPUBindGroup
Content timeline steps:
-
Let bindGroup be a new
GPUBindGroup
object. -
Issue the initialization steps on the Device timeline of this.
-
Return bindGroup.
Device timeline initialization steps:
-
Let limits be this.
[[device]]
.[[limits]]
. -
If any of the following conditions are unsatisfied generate a validation error, make bindGroup invalid, and stop.
-
descriptor.
layout
is valid to use with this. -
The number of
entries
of descriptor.layout
is exactly equal to the number of descriptor.
.
For each
GPUBindGroupEntry
bindingDescriptor in descriptor.entries
:-
Let resource be bindingDescriptor.
resource
. -
There is exactly one
GPUBindGroupLayoutEntry
layoutBinding in descriptor.layout
.entries
such that layoutBinding.binding
is equal to bindingDescriptor.
. -
If the defined binding member for layoutBinding is
sampler
-
-
resource is a
GPUSampler
. -
resource is valid to use with this.
-
If layoutBinding.
sampler
.type
is:"filtering"
-
resource.
[[isComparison]]
isfalse
. "non-filtering"
-
resource.
[[isFiltering]]
isfalse
. resource.[[isComparison]]
isfalse
. "comparison"
-
resource.
[[isComparison]]
istrue
.
-
texture
-
-
resource is a
GPUTextureView
. -
resource is valid to use with this.
-
Let texture be resource.
[[texture]]
. -
layoutBinding.
texture
.viewDimension
is equal to resource’sdimension
. -
layoutBinding.
texture
.sampleType
is compatible with resource’sformat
. -
texture’s
usage
includesTEXTURE_BINDING
. -
If layoutBinding.
texture
.multisampled
istrue
, texture’ssampleCount
>1
. Otherwise, texture’s sampleCount
is1
.
-
storageTexture
-
-
resource is a
GPUTextureView
. -
resource is valid to use with this.
-
Let texture be resource.
[[texture]]
. -
layoutBinding.
storageTexture
.viewDimension
is equal to resource’sdimension
. -
layoutBinding.
storageTexture
.format
is equal to resource.[[descriptor]]
.format
. -
texture’s
usage
includesSTORAGE_BINDING
. -
resource.
[[descriptor]]
.mipLevelCount
must be 1.
-
buffer
-
-
resource is a
GPUBufferBinding
. -
resource.
buffer
is valid to use with this. -
The bound part designated by resource.
offset
and resource.size
resides inside the buffer and has non-zero size. -
effective buffer binding size(resource) ≥ layoutBinding.
buffer
.minBindingSize
. -
If layoutBinding.
buffer
.type
is"uniform"
-
-
resource.
buffer
.usage
includesUNIFORM
. -
effective buffer binding size(resource) ≤ limits.
maxUniformBufferBindingSize
. -
resource.
offset
is a multiple of limits.minUniformBufferOffsetAlignment
.
-
"storage"
or"read-only-storage"
-
-
resource.
buffer
.usage
includesSTORAGE
. -
effective buffer binding size(resource) ≤ limits.
maxStorageBufferBindingSize
. -
effective buffer binding size(resource) is a multiple of 4.
-
resource.
offset
is a multiple of limits.minStorageBufferOffsetAlignment
.
-
-
externalTexture
-
-
resource is a
GPUExternalTexture
. -
resource is valid to use with this.
-
-
-
Let bindGroup.
[[layout]]
= descriptor.layout
. -
Let bindGroup.
[[entries]]
= descriptor.entries
. -
Let bindGroup.
[[usedResources]]
= {}. -
For each
GPUBindGroupEntry
bindingDescriptor in descriptor.entries
:-
Let internalUsage be the binding usage for layoutBinding.
-
Each subresource seen by resource is added to
[[usedResources]]
as internalUsage.
-
-
effective buffer binding size(binding)
-
If binding.
size
is not provided:-
Return max(0, binding.
buffer
.size
- binding.offset
).
-
-
Return binding.
size
.
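The algorithm above transcribes directly to JavaScript (non-normative; a binding is modeled as `{ buffer: { size }, offset, size? }`):

```javascript
// Non-normative transcription of "effective buffer binding size".
function effectiveBufferBindingSize(binding) {
  if (binding.size === undefined) {
    // No explicit size: the range runs from offset to the end of the buffer.
    return Math.max(0, binding.buffer.size - binding.offset);
  }
  return binding.size;
}
```

This is the value that createBindGroup() compares against minBindingSize and the per-limit maximum binding sizes.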
Two GPUBufferBinding
objects a and b are considered buffer-binding-aliasing if and only if all of the following are true:
-
a.
buffer
== b.buffer
-
The range formed by a.
offset
and a.size
intersects the range formed by b.offset
and b.size
, where if a size
is unspecified, the range goes to the end of the buffer.
Note: When doing this calculation, any dynamic offsets have already been applied to the ranges.
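A non-normative sketch of this aliasing test (dynamic offsets are assumed to be already applied, and bindings use the `{ buffer, offset, size? }` shape):

```javascript
// Non-normative sketch of buffer-binding-aliasing: two bindings alias when
// they target the same buffer and their byte ranges intersect.
function bufferBindingsAlias(a, b) {
  if (a.buffer !== b.buffer) return false;
  // An unspecified size means the range extends to the end of the buffer.
  const end = (x) => (x.size === undefined ? x.buffer.size : x.offset + x.size);
  return a.offset < end(b) && b.offset < end(a);
}
```

Half-open ranges are used, so bindings that merely touch end-to-start do not alias.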
8.3. GPUPipelineLayout
A GPUPipelineLayout
defines the mapping between resources of all GPUBindGroup
objects set up during command encoding in setBindGroup(), and the shaders of the pipeline set by GPURenderCommandsMixin.setPipeline
or GPUComputePassEncoder.setPipeline
.
The full binding address of a resource can be defined as a trio of:
-
shader stage mask, to which the resource is visible
-
bind group index
-
binding number
The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup
(with the corresponding GPUBindGroupLayout
) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.
[Exposed=(Window, Worker), SecureContext]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;
GPUPipelineLayout
has the following internal slots:
[[bindGroupLayouts]]
, of type list<GPUBindGroupLayout
>-
The
GPUBindGroupLayout
objects provided at creation inGPUPipelineLayoutDescriptor.bindGroupLayouts
.
Note: using the same GPUPipelineLayout
for many GPURenderPipeline
or GPUComputePipeline
pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.
GPUComputePipeline
object X was created with GPUPipelineLayout.bindGroupLayouts
A, B, C. GPUComputePipeline
object Y was created with GPUPipelineLayout.bindGroupLayouts
A, D, C. Supposing the command encoding sequence has two dispatches:
-
setBindGroup(0, ...)
-
setBindGroup(1, ...)
-
setBindGroup(2, ...)
-
setPipeline
(X) -
dispatchWorkgroups
() -
setBindGroup(1, ...)
-
setPipeline
(Y) -
dispatchWorkgroups
()
In this scenario, the user agent would have to re-bind the group slot 2 for the second dispatch, even though neither the GPUBindGroupLayout
at index 2 of GPUPipelineLayout.bindGroupLayouts
, nor the GPUBindGroup
at slot 2, change.
Note: the expected usage of the GPUPipelineLayout
is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.
8.3.1. Pipeline Layout Creation
A GPUPipelineLayout
is created via GPUDevice.createPipelineLayout()
.
dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};
GPUPipelineLayoutDescriptor
dictionaries define all the GPUBindGroupLayout
s used by a pipeline, and have the following members:
bindGroupLayouts
, of type sequence<GPUBindGroupLayout>-
A list of
GPUBindGroupLayout
s the pipeline will use. Each element corresponds to a @group attribute in the GPUShaderModule
, with the N
th element corresponding with @group(N)
.
createPipelineLayout(descriptor)
-
Creates a
GPUPipelineLayout
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createPipelineLayout(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUPipelineLayoutDescriptor
✘ ✘ Description of the GPUPipelineLayout
to create.Returns:
GPUPipelineLayout
Content timeline steps:
-
Let pl be a new
GPUPipelineLayout
object. -
Issue the initialization steps on the Device timeline of this.
-
Return pl.
Device timeline initialization steps:
-
Let limits be this.
[[device]]
.[[limits]]
. -
Let allEntries be the result of concatenating bgl.
[[descriptor]]
.entries
for all bgl in descriptor.bindGroupLayouts
. -
If any of the following conditions are unsatisfied generate a validation error, make pl invalid, and stop.
-
Every
GPUBindGroupLayout
in descriptor.bindGroupLayouts
must be valid to use with this and have a[[exclusivePipeline]]
ofnull
. -
The size of descriptor.
bindGroupLayouts
must be ≤ limits.maxBindGroups
. -
allEntries must not exceed the binding slot limits of limits.
-
-
Set the pl.
[[bindGroupLayouts]]
to descriptor.bindGroupLayouts
.
-
Note: two GPUPipelineLayout
objects are considered equivalent for any usage if their internal [[bindGroupLayouts]]
sequences contain GPUBindGroupLayout
objects that are group-equivalent.
8.4. Example
Create a GPUBindGroupLayout
that describes a binding with a uniform buffer, a texture, and a sampler. Then create a GPUBindGroup
and a GPUPipelineLayout
using the GPUBindGroupLayout
.
const bindGroupLayout = gpuDevice.createBindGroupLayout({
    entries: [{
        binding: 0,
        visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
        buffer: {}
    }, {
        binding: 1,
        visibility: GPUShaderStage.FRAGMENT,
        texture: {}
    }, {
        binding: 2,
        visibility: GPUShaderStage.FRAGMENT,
        sampler: {}
    }]
});

const bindGroup = gpuDevice.createBindGroup({
    layout: bindGroupLayout,
    entries: [{
        binding: 0,
        resource: { buffer: buffer },
    }, {
        binding: 1,
        resource: texture,
    }, {
        binding: 2,
        resource: sampler,
    }]
});

const pipelineLayout = gpuDevice.createPipelineLayout({
    bindGroupLayouts: [bindGroupLayout]
});
9. Shader Modules
9.1. GPUShaderModule
[Exposed=(Window, Worker), SecureContext]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> getCompilationInfo();
};
GPUShaderModule includes GPUObjectBase;
GPUShaderModule
is a reference to an internal shader module object.
9.1.1. Shader Module Creation
dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
    sequence<GPUShaderModuleCompilationHint> compilationHints = [];
};
code
, of type USVString-
The WGSL source code for the shader module.
sourceMap
, of type object-
If defined, MAY be interpreted as the source-map-v3 format.
Source maps are optional, but serve as a standardized way to support dev-tool integration such as source-language debugging [SourceMap]. WGSL names (identifiers) in source maps follow the rules defined in WGSL identifier comparison.
compilationHints
, of type sequence<GPUShaderModuleCompilationHint>, defaulting to[]
-
A list of
GPUShaderModuleCompilationHint
s. Any hint provided by an application should contain information about one entry point of a pipeline that will eventually be created from the entry point.
Implementations should use any information present in the
GPUShaderModuleCompilationHint
to perform as much compilation as is possible within createShaderModule()
. Aside from type-checking, these hints are not validated in any way.
NOTE:
Supplying information in
compilationHints
does not have any observable effect, other than performance. It may be detrimental to performance to provide hints for pipelines that never end up being created.Because a single shader module can hold multiple entry points, and multiple pipelines can be created from a single shader module, it can be more performant for an implementation to do as much compilation as possible once in
createShaderModule()
rather than multiple times in the multiple calls to createComputePipeline()
or createRenderPipeline()
. Note: Hints are not validated in an observable way, but user agents may surface identifiable errors (like unknown entry point names or incompatible pipeline layouts) to developers, for example in the browser developer console.
createShaderModule(descriptor)
-
Creates a
GPUShaderModule
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createShaderModule(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUShaderModuleDescriptor
✘ ✘ Description of the GPUShaderModule
to create.Returns:
GPUShaderModule
Content timeline steps:
-
Let sm be a new
GPUShaderModule
object. -
Issue the initialization steps on the Device timeline of this.
-
Return sm.
Device timeline initialization steps:
-
Let result be the result of shader module creation with the WGSL source descriptor.
code
. -
If any of the following requirements are unmet, generate a validation error, make sm invalid, and return.
-
this must be valid.
-
result must not be a shader-creation program error.
Note: Uncategorized errors cannot arise from shader module creation. Implementations which detect such errors during shader module creation must behave as if the shader module is valid, and defer surfacing the error until pipeline creation.
-
Describe remaining
createShaderModule()
validation and algorithm steps.NOTE:
User agents should not include detailed compiler error messages or shader text in the
message
text of validation errors arising here: these details are accessible via getCompilationInfo()
. User agents should surface human-readable, formatted error details to developers for easier debugging (for example as a warning in the browser developer console, expandable to show full shader source). As shader compilation errors should be rare in production applications, user agents could choose to surface them to developers regardless of error handling (GPU error scopes or
uncapturederror
event handlers), e.g. as an expandable warning. If not, they should provide and document another way for developers to access human-readable error details, for example by adding a checkbox to show errors unconditionally, or by showing human-readable details when logging a GPUCompilationInfo
object to the console. -
Create a GPUShaderModule
from WGSL code:
// A simple vertex and fragment shader pair that will fill the viewport with red.
const shaderSource = `
  var<private> pos : array<vec2<f32>, 3> = array<vec2<f32>, 3>(
    vec2(-1.0, -1.0), vec2(-1.0, 3.0), vec2(3.0, -1.0));

  @vertex
  fn vertexMain(@builtin(vertex_index) vertexIndex : u32) -> @builtin(position) vec4<f32> {
    return vec4(pos[vertexIndex], 1.0, 1.0);
  }

  @fragment
  fn fragmentMain() -> @location(0) vec4<f32> {
    return vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

const shaderModule = gpuDevice.createShaderModule({
  code: shaderSource,
});
9.1.1.1. Shader Module Compilation Hints
Shader module compilation hints are optional, additional information indicating how a given GPUShaderModule
entry point is intended to be used in the future. For some implementations this information may aid in compiling the shader module earlier, potentially increasing performance.
dictionary GPUShaderModuleCompilationHint {
    required USVString entryPoint;
    (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout
, of type(GPUPipelineLayout or GPUAutoLayoutMode)
-
A
GPUPipelineLayout
that the GPUShaderModule
may be used with in a future createComputePipeline()
or createRenderPipeline()
call. If set to "auto", the default pipeline layout for the entry point associated with this hint will be used.
NOTE:
If possible, authors should be supplying the same information to createShaderModule()
and createComputePipeline()
/ createRenderPipeline()
.
If an application is unable to provide hint information at the time of calling createShaderModule()
, it should usually not delay calling createShaderModule()
, but instead just omit the unknown information from the compilationHints
sequence or the individual members of GPUShaderModuleCompilationHint
. Omitting this information may cause compilation to be deferred to createComputePipeline()
/ createRenderPipeline()
.
If an author is not confident that the hint information passed to createShaderModule()
will match the information later passed to createComputePipeline()
/ createRenderPipeline()
with that same module, they should avoid passing that information to createShaderModule()
, as passing mismatched information to createShaderModule()
may cause unnecessary compilations to occur.
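For instance, a descriptor that front-loads compilation for a compute entry point might look like the following (illustrative only; the WGSL source and the "auto" layout choice are assumptions for the sketch):

```javascript
// Illustrative descriptor only; the WGSL source and "auto" layout are
// assumptions. Passing the same layout later to createComputePipeline()
// lets the implementation reuse work done at createShaderModule() time.
const computeSource = `@compute @workgroup_size(64) fn main() {}`;
const descriptor = {
  code: computeSource,
  compilationHints: [{ entryPoint: "main", layout: "auto" }],
};
// const module = gpuDevice.createShaderModule(descriptor);
```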
9.1.2. Shader Module Compilation Information
enum GPUCompilationMessageType {
    "error",
    "warning",
    "info",
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
    readonly attribute unsigned long long offset;
    readonly attribute unsigned long long length;
};

[Exposed=(Window, Worker), Serializable, SecureContext]
interface GPUCompilationInfo {
    readonly attribute FrozenArray<GPUCompilationMessage> messages;
};
A GPUCompilationMessage
is an informational, warning, or error message generated by the GPUShaderModule
compiler. The messages are intended to be human readable to help developers diagnose issues with their shader code
. Each message may correspond to either a single point in the shader code, a substring of the shader code, or may not correspond to any specific point in the code at all.
GPUCompilationMessage
has the following attributes:
message
, of type DOMString, readonly-
The human-readable, localizable text for this compilation message.
Note: The
message
should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.

Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.
type
, of type GPUCompilationMessageType, readonly-
The severity level of the message.
If the
type
is"error"
, it corresponds to a shader-creation error. lineNum
, of type unsigned long long, readonly-
The line number in the shader
code
the message
corresponds to. Value is one-based, such that a lineNum of 1
indicates the first line of the shader code
. Lines are delimited by line breaks. If the
message
corresponds to a substring, this points to the line on which the substring begins. Must be 0
if the message
does not correspond to any specific point in the shader code
. linePos
, of type unsigned long long, readonly-
The offset, in UTF-16 code units, from the beginning of line
lineNum
of the shader code
to the point or beginning of the substring that the message
corresponds to. Value is one-based, such that a linePos
of 1
indicates the first code unit of the line. If
message
corresponds to a substring, this points to the first UTF-16 code unit of the substring. Must be 0
if the message
does not correspond to any specific point in the shader code
. offset
, of type unsigned long long, readonly-
The offset from the beginning of the shader
code
in UTF-16 code units to the point or beginning of the substring that message
corresponds to. Must reference the same position as lineNum
and linePos
. Must be 0
if the message
does not correspond to any specific point in the shader code
. length
, of type unsigned long long, readonly-
The number of UTF-16 code units in the substring that
message
corresponds to. If the message does not correspond with a substring, then length
must be 0.
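Taken together, offset and length let tooling extract the exact source text a message refers to. A small sketch (non-normative helper, not part of the API):

```javascript
// Sketch: pulling the source text a compilation message points at,
// using the message's offset/length (in UTF-16 code units).
function messageSubstring(code, message) {
  if (message.length === 0) return "";  // message has no associated substring
  return code.substr(message.offset, message.length);
}
```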
Note: GPUCompilationMessage
.lineNum
and GPUCompilationMessage
.linePos
are one-based since the most common use for them is expected to be printing human readable messages that can be correlated with the line and column numbers shown in many text editors.
Note: GPUCompilationMessage
.offset
and GPUCompilationMessage
.length
are appropriate to pass to substr()
in order to retrieve the substring of the shader code
the message
corresponds to.
getCompilationInfo()
-
Returns any messages generated during the
GPUShaderModule
's compilation. The locations, order, and contents of messages are implementation-defined. In particular, messages may not be ordered by
lineNum
.Called on:
GPUShaderModule
thisReturns:
Promise
<GPUCompilationInfo
>Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Issue the synchronization steps on the Device timeline of this.
-
Return promise.
Device timeline synchronization steps:
-
When the device timeline becomes informed that shader module creation has completed for this:
-
Let messages be a list of any errors, warnings, or informational messages generated during shader module creation for this.
-
Issue the subsequent steps on contentTimeline.
-
Content timeline steps:
-
Let info be a new
GPUCompilationInfo
. -
For each message in messages:
-
Let m be a new
GPUCompilationMessage
. -
Set m.
message
to be the text of message. -
- If message is a shader-creation error:
-
Set m.
type
to"error"
- If message is a warning:
-
Set m.
type
to"warning"
- Otherwise:
-
Set m.
type
to"info"
-
- If message is associated with a specific substring or position within the shader
code
: -
-
Set m.
lineNum
to the one-based number of the first line that the message refers to.
Set m.
linePos
to the one-based number of the first UTF-16 code unit on m.lineNum
that the message refers to, or1
if the message refers to the entire line.
Set m.
offset
to the number of UTF-16 code units from the beginning of the shader to the beginning of the substring or position that message refers to.
Set m.
length
to the length of the substring in UTF-16 code units that message refers to, or 0 if message refers to a position.
-
- Otherwise:
-
-
Set m.
lineNum
to0
. -
Set m.
linePos
to0
. -
Set m.
offset
to0
. -
Set m.
length
to0
.
-
-
Append m to info.
messages
.
-
-
Resolve promise with info.
-
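Typical usage of getCompilationInfo() can be sketched as follows (the logging format is an assumption; only the getCompilationInfo() call and message attributes come from this spec):

```javascript
// Sketch: report compiler diagnostics for a shader module and flag errors.
async function logDiagnostics(shaderModule) {
  const info = await shaderModule.getCompilationInfo();
  let hadError = false;
  for (const msg of info.messages) {
    console.log(`${msg.type} at ${msg.lineNum}:${msg.linePos}: ${msg.message}`);
    if (msg.type === "error") hadError = true;
  }
  return hadError;
}
```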
10. Pipelines
A pipeline, be it GPUComputePipeline
or GPURenderPipeline
, represents the complete function done by a combination of the GPU hardware, the driver, and the user agent, that processes the input data in the shape of bindings and vertex buffers, and produces some output, like the colors in the output render targets.
Structurally, the pipeline consists of a sequence of programmable stages (shaders)and fixed-function states, such as the blending modes.
Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code, and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.
This combined state is created as a single object (a GPUComputePipeline
or GPURenderPipeline
) and switched using one command (GPUComputePassEncoder
.setPipeline()
or GPURenderCommandsMixin
.setPipeline()
respectively).
There are two ways to create pipelines:
- immediate pipeline creation
-
createComputePipeline()
and createRenderPipeline()
return a pipeline object which can be used immediately in a pass encoder. When this fails, the pipeline object will be invalid and the call will generate either a validation error or an internal error.
Note: A handle object is returned immediately, but actual pipeline creation is not synchronous. If pipeline creation takes a long time, this can incur a stall in the device timeline at some point between the creation call and execution of the
submit()
in which it is first used. The point is unspecified, but most likely to be one of: at creation, at the first usage of the pipeline in setPipeline()
, at the corresponding finish()
of that GPUCommandEncoder
or GPURenderBundleEncoder
, or at submit()
of that GPUCommandBuffer
. - async pipeline creation
-
createComputePipelineAsync()
and createRenderPipelineAsync()
return a Promise
which resolves to a pipeline object when creation of the pipeline has completed. When this fails, the
Promise
rejects with a GPUPipelineError
.
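A minimal sketch of reacting to both failure reasons during async creation; the helper name tryCreatePipeline is hypothetical, and the device and descriptor are assumed to come from elsewhere.

```javascript
// Sketch: handling GPUPipelineError from async pipeline creation.
// `device` is assumed to be a GPUDevice (anything exposing
// createComputePipelineAsync works); tryCreatePipeline is hypothetical.
async function tryCreatePipeline(device, descriptor) {
  try {
    return await device.createComputePipelineAsync(descriptor);
  } catch (e) {
    if (e instanceof GPUPipelineError && e.reason === 'validation') {
      console.error('Invalid pipeline descriptor:', e.message);
    } else if (e instanceof GPUPipelineError) { // e.reason === 'internal'
      console.error('Pipeline creation failed internally:', e.message);
    }
    return null; // Caller may fall back to a simpler pipeline.
  }
}
```

The caller can distinguish a descriptor bug ("validation") from a driver/compiler limitation ("internal") and degrade gracefully in the latter case.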
GPUPipelineError
describes a pipeline creation failure.
[Exposed=(Window, Worker), SecureContext, Serializable]
interface GPUPipelineError : DOMException {
    constructor(optional DOMString message = "", GPUPipelineErrorInit options);
    readonly attribute GPUPipelineErrorReason reason;
};

dictionary GPUPipelineErrorInit {
    required GPUPipelineErrorReason reason;
};

enum GPUPipelineErrorReason {
    "validation",
    "internal",
};
GPUPipelineError
constructor:
constructor()
-
Arguments for the GPUPipelineError.constructor() method. Parameter Type Nullable Optional Description message
DOMString
✘ ✔ Error message of the base DOMException
.options
GPUPipelineErrorInit
✘ ✘ Options specific to GPUPipelineError
.
GPUPipelineError
has the following attributes:
reason
, of type GPUPipelineErrorReason, readonly-
A read-only slot-backed attribute exposing the type of error encountered in pipeline creationas a
GPUPipelineErrorReason
:-
"validation"
: A validation error. -
"internal"
: An internal error.
-
GPUPipelineError
objects are serializable objects.
Their serialization steps, given value and serialized, are:
-
Run the
DOMException
serialization steps given value and serialized.
Their deserialization steps, given value and serialized, are:
-
Run the
DOMException
deserialization steps given value and serialized.
10.1. Base pipelines
enum GPUAutoLayoutMode {
    "auto",
};

dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    required (GPUPipelineLayout or GPUAutoLayoutMode) layout;
};
layout
, of type(GPUPipelineLayout or GPUAutoLayoutMode)
-
The
GPUPipelineLayout
for this pipeline, or "auto"
to generate the pipeline layout automatically. Note: If
"auto"
is used, the pipeline cannot share GPUBindGroup
s with any other pipelines.
interface mixin GPUPipelineBase {
    [NewObject] GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};
GPUPipelineBase
has the following internal slots:
[[layout]]
, of typeGPUPipelineLayout
-
The definition of the layout of resources which can be used with
this
.
GPUPipelineBase
has the following methods:
getBindGroupLayout(index)
-
Gets a
GPUBindGroupLayout
that is compatible with theGPUPipelineBase
'sGPUBindGroupLayout
atindex
.Called on:
GPUPipelineBase
thisArguments:
Arguments for the GPUPipelineBase.getBindGroupLayout(index) method. Parameter Type Nullable Optional Description index
unsigned long
✘ ✘ Index into the pipeline layout’s [[bindGroupLayouts]]
sequence.Returns:
GPUBindGroupLayout
Content timeline steps:
-
Let layout be a new
GPUBindGroupLayout
object. -
Issue the initialization steps on the Device timeline of this.
-
Return layout.
Device timeline initialization steps:
-
If any of the following conditions are unsatisfied, generate a validation error, make layout invalid, and stop.
-
this is valid.
-
index < the size of this.
[[layout]]
.[[bindGroupLayouts]]
-
-
Initialize layout so it is a copy of this.
[[layout]]
.[[bindGroupLayouts]]
[index].Note:
GPUBindGroupLayout
is only ever used by-value, not by-reference, so this is equivalent to returning the same internal object in a new wrapper. A new GPUBindGroupLayout
wrapper is returned each time to avoid a round-trip between the Content timeline and the Device timeline.
-
10.1.1. Default pipeline layout
A GPUPipelineBase
object that was created with a layout
set to "auto"
has a default layout created and used instead.
Note: Default layouts are provided as a convenience for simple pipelines, but use of explicit layouts is recommended in most cases. Bind groups created from default layouts cannot be used with other pipelines, and the structure of the default layout may change when altering shaders, causing unexpected bind group creation errors.
To create a default pipeline layout for GPUPipelineBase
pipeline,run the following steps:
-
Let groupCount be 0.
-
Let groupDescs be a sequence of device.
[[limits]]
.maxBindGroups
new GPUBindGroupLayoutDescriptor
objects. -
For each groupDesc in groupDescs:
-
Set groupDesc.
entries
to an empty sequence.
-
-
For each
GPUProgrammableStage
stageDesc in the descriptor used to create pipeline:-
Let shaderStage be the
GPUShaderStageFlags
for the shader stageat which stageDesc is used in pipeline. -
Let entryPoint be get the entry point(shaderStage, stageDesc). Assert entryPoint is not
null
. -
For each resource resource statically used by entryPoint:
-
Let group be resource’s "group" decoration.
-
Let binding be resource’s "binding" decoration.
-
Let entry be a new
GPUBindGroupLayoutEntry
. -
Set entry.
binding
to binding. -
Set entry.
visibility
to shaderStage. -
If resource is for a sampler binding:
-
Let samplerLayout be a new
GPUSamplerBindingLayout
. -
Set entry.
sampler
to samplerLayout.
-
-
If resource is for a comparison sampler binding:
-
Let samplerLayout be a new
GPUSamplerBindingLayout
. -
Set samplerLayout.
type
to"comparison"
. -
Set entry.
sampler
to samplerLayout.
-
-
If resource is for a buffer binding:
-
Let bufferLayout be a new
GPUBufferBindingLayout
. -
Set bufferLayout.
minBindingSize
to resource’s minimum buffer binding size. -
If resource is for a read-only storage buffer:
-
Set bufferLayout.
type
to"read-only-storage"
.
-
-
If resource is for a storage buffer:
-
Set bufferLayout.
type
to"storage"
.
-
-
Set entry.
buffer
to bufferLayout.
-
-
If resource is for a sampled texture binding:
-
Let textureLayout be a new
GPUTextureBindingLayout
. -
If resource is a depth texture binding:
-
Set textureLayout.
sampleType
to"depth"
Else if the sampled type of resource is:
f32
and there exists a static use of resource by stageDesc with a textureSample*
builtin-
Set textureLayout.
sampleType
to"float"
f32
otherwise-
Set textureLayout.
sampleType
to"unfilterable-float"
i32
-
Set textureLayout.
sampleType
to"sint"
u32
-
Set textureLayout.
sampleType
to"uint"
-
-
Set textureLayout.
viewDimension
to resource’s dimension. -
If resource is for a multisampled texture:
-
Set textureLayout.
multisampled
totrue
.
-
-
Set entry.
texture
to textureLayout.
-
-
If resource is for a storage texture binding:
-
Let storageTextureLayout be a new
GPUStorageTextureBindingLayout
. -
Set storageTextureLayout.
format
to resource’s format. -
Set storageTextureLayout.
viewDimension
to resource’s dimension. -
If the access mode is:
read
-
Set storageTextureLayout.
access
to"read-only"
. write
-
Set storageTextureLayout.
access
to"write-only"
. read_write
-
Set storageTextureLayout.
access
to"read-write"
.
-
Set entry.
storageTexture
to storageTextureLayout.
-
-
Set groupCount to max(groupCount, group + 1).
-
If groupDescs[group] has an entry previousEntry with
binding
equal to binding:-
If entry has different
visibility
than previousEntry:-
Add the bits set in entry.
visibility
into previousEntry.visibility
-
-
If resource is for a buffer binding and entry has greater
buffer
.minBindingSize
than previousEntry:-
Set previousEntry.
buffer
.minBindingSize
to entry.buffer
.minBindingSize
.
-
-
If resource is a sampled texture binding and entry has different
texture
.sampleType
than previousEntry and both entry and previousEntry havetexture
.sampleType
of either"float"
or"unfilterable-float"
:-
Set previousEntry.
texture
.sampleType
to"float"
.
-
-
If any other property is unequal between entry and previousEntry:
-
Return
null
(which will cause the creation of the pipeline to fail).
-
-
If resource is a storage texture binding, entry.storageTexture.
access
is"read-write"
, previousEntry.storageTexture.access
is"write-only"
, and previousEntry.storageTexture.format
is compatible withSTORAGE_BINDING
and"read-write"
according to the § 26.1.1 Plain color formats table:-
Set previousEntry.storageTexture.
access
to"read-write"
.
-
-
-
Else
-
Append entry to groupDescs[group].
-
-
-
-
Let groupLayouts be a new list.
-
For each i from 0 to groupCount - 1, inclusive:
-
Let groupDesc be groupDescs[i].
-
Let bindGroupLayout be the result of calling device.
createBindGroupLayout()
(groupDesc). -
Set bindGroupLayout.
[[exclusivePipeline]]
to pipeline. -
Append bindGroupLayout to groupLayouts.
-
-
Let desc be a new
GPUPipelineLayoutDescriptor
. -
Set desc.
bindGroupLayouts
to groupLayouts. -
Return device.
createPipelineLayout()
(desc).
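The entry-merging rules in the loop above can be sketched as one helper, with plain objects standing in for GPUBindGroupLayoutEntry. Only the cases handled above are modeled: visibility accumulation, minBindingSize, and the "float"/"unfilterable-float" promotion.

```javascript
// Sketch of default-layout entry merging: two shader stages using the same
// (group, binding) combine their GPUBindGroupLayoutEntry-like records.
function mergeEntries(previous, entry) {
  // Visibility accumulates across stages (bitwise OR of stage flags).
  previous.visibility |= entry.visibility;
  // Buffer bindings keep the larger minBindingSize.
  if (previous.buffer && entry.buffer) {
    previous.buffer.minBindingSize =
      Math.max(previous.buffer.minBindingSize, entry.buffer.minBindingSize);
  }
  // Differing "float"/"unfilterable-float" sampled textures promote to "float".
  const floaty = t => t === 'float' || t === 'unfilterable-float';
  if (previous.texture && entry.texture &&
      previous.texture.sampleType !== entry.texture.sampleType &&
      floaty(previous.texture.sampleType) && floaty(entry.texture.sampleType)) {
    previous.texture.sampleType = 'float';
  }
  return previous;
}
```

Any other mismatch between the two entries causes default-layout creation to fail, which is why explicit layouts are recommended for non-trivial pipelines.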
10.1.2. GPUProgrammableStage
A GPUProgrammableStage
describes the entry point in the user-provided GPUShaderModule
that controls one of the programmable stages of a pipeline. Entry point names follow the rules defined in WGSL identifier comparison.
dictionary GPUProgrammableStage {
    required GPUShaderModule module;
    USVString entryPoint;
    record<USVString, GPUPipelineConstantValue> constants;
};

typedef double GPUPipelineConstantValue; // May represent WGSL's bool, f32, i32, u32, and f16 if enabled.
GPUProgrammableStage
has the following members:
module
, of type GPUShaderModule-
The
GPUShaderModule
containing the code that this programmable stage will execute. entryPoint
, of type USVString-
The name of the function in
module
that this stage will use toperform its work.NOTE: Since the
entryPoint
dictionary member isnot required, the consumer of aGPUProgrammableStage
must use the"get the entry point" algorithm to determine which entry pointit refers to. constants
, of type record<USVString, GPUPipelineConstantValue>-
Specifies the values of pipeline-overridable constants in the shader module
module
. Each such pipeline-overridable constant is uniquely identified by a single pipeline-overridable constant identifier string, representing the pipeline constant ID of the constant if its declaration specifies one, and otherwise the constant’s identifier name.
The key of each key-value pair must equal the identifier string of one such constant, with the comparison performed according to the rules for WGSL identifier comparison. When the pipeline is executed, that constant will have the specified value.
Values are specified as
GPUPipelineConstantValue
, which is adouble
. They are converted to the WGSL type of the pipeline-overridable constant (bool
/i32
/u32
/f32
/f16
). If conversion fails, a validation error is generated. Pipeline-overridable constants defined in WGSL:
@id(0)    override has_point_light: bool = true; // Algorithmic control.
@id(1200) override specular_param: f32 = 2.3;    // Numeric control.
@id(1300) override gain: f32;                    // Must be overridden.
override width: f32 = 0.0;   // Specified at the API level
                             // using the name "width".
override depth: f32;         // Specified at the API level
                             // using the name "depth".
                             // Must be overridden.
override height = 2 * depth; // The default value
                             // (if not set at the API level)
                             // depends on another
                             // overridable constant.
Corresponding JavaScript code, providing only the overrides which are required (have no defaults):
{
  // ...
  constants: {
    1300: 2.0, // "gain"
    depth: -1, // "depth"
  }
}
Corresponding JavaScript code, overriding all constants:
{
  // ...
  constants: {
    0: false,  // "has_point_light"
    1200: 3.0, // "specular_param"
    1300: 2.0, // "gain"
    width: 20, // "width"
    depth: -1, // "depth"
    height: 15, // "height"
  }
}
To get the entry point(GPUShaderStage
stage, GPUProgrammableStage
descriptor)
-
If descriptor.
entryPoint
is provided:-
If descriptor.
module
contains an entry pointwhose name equals descriptor.entryPoint
,and whose shader stage equals stage,return that entry point. -
Otherwise, return
null
.
-
-
Otherwise:
-
If there is exactly one entry point in descriptor.
module
whose shader stage equals stage, return that entry point. -
Otherwise, return
null
.
-
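The algorithm above can be transcribed directly, modeling the module's reflected entry points as { name, stage } records (an assumption for sketching; real reflection comes from the WGSL compiler):

```javascript
// Sketch of "get the entry point": descriptor.module is modeled as an
// array of { name, stage } entry-point records.
function getEntryPoint(stage, descriptor) {
  if (descriptor.entryPoint !== undefined) {
    // Explicit entryPoint: it must name an entry point of the right stage.
    return descriptor.module.find(
      e => e.name === descriptor.entryPoint && e.stage === stage) ?? null;
  }
  // Omitted entryPoint: usable only when exactly one candidate exists.
  const candidates = descriptor.module.filter(e => e.stage === stage);
  return candidates.length === 1 ? candidates[0] : null;
}
```

Note that omitting entryPoint fails (returns null) as soon as a module contains two entry points for the same stage, even if one of them is "obviously" intended.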
validating GPUProgrammableStage(stage, descriptor, layout)
Arguments:
-
GPUShaderStage
stage -
GPUProgrammableStage
descriptor -
GPUPipelineLayout
layout
Return true
if all requirements in the following steps are satisfied, and false
otherwise:
-
descriptor.
module
must be a validGPUShaderModule
. -
Let entryPoint be get the entry point(stage, descriptor).
-
entryPoint must not be
null
. -
For each binding that is statically used by entryPoint:
-
validating shader binding(binding, layout) must return
true
.
-
-
For each texture and sampler statically used together by entryPoint in texture sampling calls:
-
Let texture be the
GPUBindGroupLayoutEntry
corresponding to the sampled texture in the call. -
Let sampler be the
GPUBindGroupLayoutEntry
corresponding to the used sampler in the call. -
If sampler.
type
is"filtering"
,then texture.sampleType
must be"float"
.
Note:
"comparison"
samplers can also only be used with"depth"
textures, because they are the only texture type that canbe bound to WGSLtexture_depth_*
bindings. -
-
For each key → value in descriptor.
constants
:-
key must equal the pipeline-overridable constant identifier string of some pipeline-overridable constant defined in the shader module descriptor.
module
by the rules defined in WGSL identifier comparison. Let the type of that constant be T. -
Converting the IDL value value to WGSL type T must not throw a
TypeError
.
-
-
For each pipeline-overridable constant identifier string key which is statically used by entryPoint:
-
If the pipeline-overridable constant identified by key does not have a default value, descriptor.
constants
must contain key.
-
-
Pipeline-creation program errors must not result from the rules of the [WGSL] specification.
validating shader binding(variable, layout)
Arguments:
-
shader binding declaration variable, a module-scope variable declaration reflected from a shader module
-
GPUPipelineLayout
layout
Let bindGroup be the bind group index, and bindIndex be the binding index, of the shader binding declaration variable.
Return true
if all of the following conditions are satisfied:
-
layout.
[[bindGroupLayouts]]
[bindGroup] contains a GPUBindGroupLayoutEntry
entry whose entry.binding
== bindIndex. -
If the defined binding member for entry is:
buffer
-
If entry.
buffer
.type
is:"uniform"
-
variable is declared with address space
uniform
. "storage"
-
variable is declared with address space
storage
and access moderead_write
. "read-only-storage"
-
variable is declared with address space
storage
and access moderead
.
If entry.
buffer
.minBindingSize
is not0
,then it must be at least the minimum buffer binding size for the associatedbuffer binding variable in the shader. sampler
-
If entry.
sampler
.type
is:"filtering"
or"non-filtering"
-
variable has type
sampler
. "comparison"
-
variable has type
sampler_comparison
.
texture
-
If, and only if, entry.
texture
.multisampled
is true
, variable has type texture_multisampled_2d<T>
or texture_depth_multisampled_2d
. If entry.
texture
.sampleType
is:"float"
,"unfilterable-float"
,"sint"
or"uint"
-
variable has one of the types:
-
texture_1d<T>
-
texture_2d<T>
-
texture_2d_array<T>
-
texture_cube<T>
-
texture_cube_array<T>
-
texture_3d<T>
-
texture_multisampled_2d<T>
If entry.
texture
.sampleType
is:"float"
or"unfilterable-float"
-
The sampled type
T
isf32
. "sint"
-
The sampled type
T
isi32
. "uint"
-
The sampled type
T
isu32
.
-
"depth"
-
variable has one of the types:
-
texture_2d<T>
-
texture_2d_array<T>
-
texture_cube<T>
-
texture_cube_array<T>
-
texture_multisampled_2d<T>
-
texture_depth_2d
-
texture_depth_2d_array
-
texture_depth_cube
-
texture_depth_cube_array
-
texture_depth_multisampled_2d
where the sampled type
T
isf32
. -
If entry.
texture
.viewDimension
is:"1d"
-
variable has type
texture_1d<T>
. "2d"
-
variable has type
texture_2d<T>
ortexture_multisampled_2d<T>
. "2d-array"
-
variable has type
texture_2d_array<T>
. "cube"
-
variable has type
texture_cube<T>
. "cube-array"
-
variable has type
texture_cube_array<T>
. "3d"
-
variable has type
texture_3d<T>
.
storageTexture
-
If entry.
storageTexture
.viewDimension
is:"1d"
-
variable has type
texture_storage_1d<T, A>
. "2d"
-
variable has type
texture_storage_2d<T, A>
. "2d-array"
-
variable has type
texture_storage_2d_array<T, A>
. "3d"
-
variable has type
texture_storage_3d<T, A>
.
If entry.
storageTexture
.access
is:"write-only"
-
The access mode
A
iswrite
. "read-only"
-
The access mode
A
isread
. "read-write"
-
The access mode
A
isread_write
orwrite
.
The texel format
T
equals entry.storageTexture
.format
.
The minimum buffer binding size for a buffer binding variable var is computed as follows:
-
Let T be the store type of var.
-
If T is a runtime-sized array, or contains a runtime-sized array, replace that
array<E>
with array<E, 1>
. Note: This ensures there’s always enough memory for one element, which allows array indices to be clamped to the length of the array resulting in an in-memory access.
-
Return SizeOf(T).
Note: Enforcing this lower bound ensures reads and writes via the buffer variable only access memory locations within the bound region of the buffer.
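A sketch of this computation over a toy type model: a store type is either a fixed-size type ({ size }) or a struct whose trailing member is a runtime-sized array ({ fixedSize, runtimeArray: { stride } }). SizeOf is simplified here (an assumption); real WGSL sizing also involves alignment rules.

```javascript
// Sketch of "minimum buffer binding size" over a simplified type model.
function minimumBufferBindingSize(storeType) {
  if (storeType.runtimeArray) {
    // Replace array<E> with array<E, 1>: reserve room for exactly one element.
    return storeType.fixedSize + storeType.runtimeArray.stride * 1;
  }
  return storeType.size;
}

minimumBufferBindingSize({ size: 32 }); // → 32
// Struct with 16 fixed bytes plus a runtime-sized array of 8-byte elements:
minimumBufferBindingSize({ fixedSize: 16, runtimeArray: { stride: 8 } }); // → 24
```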
A resource binding, pipeline-overridable constant, shader stage input, or shader stage output is considered to be statically used by an entry point if it is present in the interface of the shader stage for that entry point.
10.2. GPUComputePipeline
A GPUComputePipeline
is a kind of pipeline that controls the compute shader stage,and can be used in GPUComputePassEncoder
.
Compute inputs and outputs are all contained in the bindings, according to the given GPUPipelineLayout
. The outputs correspond to buffer
bindings with a type of "storage"
and storageTexture
bindings with a type of "write-only"
or "read-write"
.
Stages of a compute pipeline:
-
Compute shader
[Exposed=(Window, Worker), SecureContext]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;
10.2.1. Compute Pipeline Creation
A GPUComputePipelineDescriptor
describes a compute pipeline. See § 23.2 Computing for additional details.
dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStage compute;
};
GPUComputePipelineDescriptor
has the following members:
compute
, of type GPUProgrammableStage-
Describes the compute shader entry point of the pipeline.
createComputePipeline(descriptor)
-
Creates a
GPUComputePipeline
using immediate pipeline creation.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createComputePipeline(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUComputePipelineDescriptor
✘ ✘ Description of the GPUComputePipeline
to create.Returns:
GPUComputePipeline
Content timeline steps:
-
Let pipeline be a new
GPUComputePipeline
object. -
Issue the initialization steps on the Device timeline of this.
-
Return pipeline.
Device timeline initialization steps:
-
Let layout be a new default pipeline layout for pipeline if descriptor.
layout
is "auto"
, and descriptor.layout
otherwise. -
If any of the requirements in the following steps are unsatisfied, generate a validation error, make pipeline invalid, and stop.
-
layout must be valid to use with this.
-
validating GPUProgrammableStage(
COMPUTE
, descriptor.compute
, layout) must succeed. -
Let entryPoint be get the entry point(
COMPUTE
, descriptor.compute
). Assert entryPoint is notnull
. -
Let workgroupStorageUsed be the sum of roundUp(16, SizeOf(T)) over each type T of all variables with address space "workgroup" statically used by entryPoint.
workgroupStorageUsed must be ≤ device.limits.
maxComputeWorkgroupStorageSize
. -
entryPoint must use ≤ device.limits.
maxComputeInvocationsPerWorkgroup
per workgroup. -
Each component of entryPoint’s
workgroup_size
attribute must be ≤ the corresponding component in[device.limits.maxComputeWorkgroupSizeX
, device.limits.maxComputeWorkgroupSizeY
, device.limits.maxComputeWorkgroupSizeZ
].
-
-
If any pipeline-creation uncategorized errors result from the implementation of pipeline creation, generate an internal error, make pipeline invalid, and stop.
Note: Even if the implementation detected uncategorized errors in shader module creation, the error is surfaced here.
-
Set pipeline.
[[layout]]
to layout.
-
createComputePipelineAsync(descriptor)
-
Creates a
GPUComputePipeline
using async pipeline creation. The returned Promise
resolves when the created pipeline is ready to be used without additional delay. If pipeline creation fails, the returned
Promise
rejects with a GPUPipelineError
.Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.
Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createComputePipelineAsync(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUComputePipelineDescriptor
✘ ✘ Description of the GPUComputePipeline
to create.Returns:
Promise
<GPUComputePipeline
>Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Issue the initialization steps on the Device timeline of this.
-
Return promise.
Device timeline initialization steps:
-
Let pipeline be a new
GPUComputePipeline
created as if this.createComputePipeline()
was called with descriptor; -
When pipeline is ready to be used or has been made invalid, issue the subsequent steps on contentTimeline.
-
Creating a simple GPUComputePipeline
:
const computePipeline = gpuDevice.createComputePipeline({
  layout: pipelineLayout,
  compute: {
    module: computeShaderModule,
    entryPoint: 'computeMain',
  }
});
10.3. GPURenderPipeline
A GPURenderPipeline
is a kind of pipeline that controls the vertex and fragment shader stages, and can be used in GPURenderPassEncoder
as well as GPURenderBundleEncoder
.
Render pipeline inputs are:
-
bindings, according to the given
GPUPipelineLayout
-
vertex and index buffers, described by
GPUVertexState
-
the color attachments, described by
GPUColorTargetState
-
optionally, the depth-stencil attachment, described by
GPUDepthStencilState
Render pipeline outputs are:
-
buffer
bindings with atype
of"storage"
-
storageTexture
bindings with aaccess
of"write-only"
or"read-write"
-
the color attachments, described by
GPUColorTargetState
-
optionally, depth-stencil attachment, described by
GPUDepthStencilState
A render pipeline is comprised of the following render stages:
-
Vertex fetch, controlled by
GPUVertexState.buffers
-
Vertex shader, controlled by
GPUVertexState
-
Primitive assembly, controlled by
GPUPrimitiveState
-
Rasterization, controlled by
GPUPrimitiveState
,GPUDepthStencilState
, andGPUMultisampleState
-
Fragment shader, controlled by
GPUFragmentState
-
Stencil test and operation, controlled by
GPUDepthStencilState
-
Depth test and write, controlled by
GPUDepthStencilState
-
Output merging, controlled by
GPUFragmentState.targets
[Exposed=(Window, Worker), SecureContext]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;
GPURenderPipeline
has the following internal slots:
[[descriptor]]
, of typeGPURenderPipelineDescriptor
-
The
GPURenderPipelineDescriptor
describing this pipeline.All optional fields of
GPURenderPipelineDescriptor
are defined. [[writesDepth]]
, of type boolean-
True if the pipeline writes to the depth component of the depth/stencil attachment
[[writesStencil]]
, of type boolean-
True if the pipeline writes to the stencil component of the depth/stencil attachment
10.3.1. Render Pipeline Creation
A GPURenderPipelineDescriptor
describes a render pipeline by configuring each of the render stages. See § 23.3 Rendering for additional details.
dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUVertexState vertex;
    GPUPrimitiveState primitive = {};
    GPUDepthStencilState depthStencil;
    GPUMultisampleState multisample = {};
    GPUFragmentState fragment;
};
GPURenderPipelineDescriptor
has the following members:
vertex
, of type GPUVertexState-
Describes the vertex shader entry point of the pipeline and its input buffer layouts.
primitive
, of type GPUPrimitiveState, defaulting to{}
-
Describes the primitive-related properties of the pipeline.
depthStencil
, of type GPUDepthStencilState-
Describes the optional depth-stencil properties, including the testing, operations, and bias.
multisample
, of type GPUMultisampleState, defaulting to{}
-
Describes the multi-sampling properties of the pipeline.
fragment
, of type GPUFragmentState-
Describes the fragment shader entry point of the pipeline and its output colors. If not provided, the § 23.3.8 No Color Output mode is enabled.
createRenderPipeline(descriptor)
-
Creates a
GPURenderPipeline
using immediate pipeline creation.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createRenderPipeline(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderPipelineDescriptor
✘ ✘ Description of the GPURenderPipeline
to create.Returns:
GPURenderPipeline
Content timeline steps:
-
If descriptor.
fragment
is provided, for each non-null colorState in descriptor.fragment.targets: ? Validate texture format required features of colorState.format with this.[[device]]. -
If descriptor.
depthStencil
is provided:-
? Validate texture format required features of descriptor.
depthStencil
.format
with this.[[device]]
.
-
-
Let pipeline be a new
GPURenderPipeline
object. -
Issue the initialization steps on the Device timeline of this.
-
Return pipeline.
Device timeline initialization steps:
-
Let layout be a new default pipeline layout for pipeline if descriptor.
layout
is "auto"
, and descriptor.layout
otherwise. -
If any of the following conditions are unsatisfied: generate a validation error, make pipeline invalid, and stop.
-
layout is valid to use with this.
-
validating GPURenderPipelineDescriptor(descriptor, layout, this) succeeds.
-
layout.
[[bindGroupLayouts]]
.length + vertexBufferCount is ≤ this.[[device]]
.[[limits]]
.maxBindGroupsPlusVertexBuffers
, where vertexBufferCount is the maximum index in descriptor.vertex
.buffers
that is not undefined
.
-
-
If any pipeline-creation uncategorized errors result from the implementation of pipeline creation, generate an internal error, make pipeline invalid, and stop.
Note: Even if the implementation detected uncategorized errors in shader module creation, the error is surfaced here.
-
Set pipeline.
[[descriptor]]
to descriptor. -
Set pipeline.
[[writesDepth]]
to false. -
Set pipeline.
[[writesStencil]]
to false. -
Let depthStencil be descriptor.
depthStencil
. -
If depthStencil is not null:
-
Set pipeline.
[[writesDepth]]
to depthStencil.depthWriteEnabled
. -
If depthStencil.
stencilWriteMask
is not 0:-
Let stencilFront be depthStencil.
stencilFront
. -
Let stencilBack be depthStencil.
stencilBack
. -
Let cullMode be descriptor.
primitive
.cullMode
. -
If cullMode is not
"front"
, and any of stencilFront.passOp
, stencilFront.depthFailOp
, or stencilFront.failOp
is not"keep"
:-
Set pipeline.
[[writesStencil]]
to true.
-
-
If cullMode is not
"back"
, and any of stencilBack.passOp
, stencilBack.depthFailOp
, or stencilBack.failOp
is not"keep"
:-
Set pipeline.
[[writesStencil]]
to true.
-
-
-
-
Set pipeline.
[[layout]]
to layout.
Issue: need a description of the render states.
-
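The [[writesDepth]]/[[writesStencil]] computation above can be sketched with plain objects standing in for the descriptor dictionaries (stencil faces are assumed to be fully populated with their defaults, e.g. "keep" operations):

```javascript
// Sketch: deriving depth/stencil write flags from a render pipeline
// descriptor-shaped object, following the steps above.
function computeDepthStencilWrites(descriptor) {
  const ds = descriptor.depthStencil;
  let writesDepth = false, writesStencil = false;
  if (ds) {
    writesDepth = !!ds.depthWriteEnabled;
    if (ds.stencilWriteMask !== 0) {
      // A face writes stencil if any of its three ops is not "keep".
      const writes = f => [f.passOp, f.depthFailOp, f.failOp].some(op => op !== 'keep');
      const cullMode = descriptor.primitive?.cullMode ?? 'none';
      if (cullMode !== 'front' && writes(ds.stencilFront)) writesStencil = true;
      if (cullMode !== 'back' && writes(ds.stencilBack)) writesStencil = true;
    }
  }
  return { writesDepth, writesStencil };
}
```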
createRenderPipelineAsync(descriptor)
-
Creates a
GPURenderPipeline
using async pipeline creation. The returned Promise
resolves when the created pipeline is ready to be used without additional delay. If pipeline creation fails, the returned
Promise
rejects with a GPUPipelineError
.Note: Use of this method is preferred whenever possible, as it prevents blocking the queue timeline work on pipeline compilation.
Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createRenderPipelineAsync(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderPipelineDescriptor
✘ ✘ Description of the GPURenderPipeline
to create.Returns:
Promise
<GPURenderPipeline
>Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Issue the initialization steps on the Device timeline of this.
-
Return promise.
Device timeline initialization steps:
-
Let pipeline be a new
GPURenderPipeline
created as if this.createRenderPipeline()
was called with descriptor; -
When pipeline is ready to be used or has been made invalid, issue the subsequent steps on contentTimeline.
-
validating GPURenderPipelineDescriptor(descriptor, layout, device)
Arguments:
-
GPURenderPipelineDescriptor
descriptor -
GPUPipelineLayout
layout -
GPUDevice
device
Return true
if all of the following conditions are satisfied:
-
validating GPUVertexState(device, descriptor.
vertex
, layout) succeeds. -
If descriptor.
fragment
is provided:-
validating GPUFragmentState(device, descriptor.
fragment
, layout) succeeds. -
If the sample_mask builtin is a shader stage output of descriptor.
fragment
:-
descriptor.
multisample
.alphaToCoverageEnabled
isfalse
.
-
-
If the frag_depth builtin is a shader stage output of descriptor.
fragment
:-
descriptor.
depthStencil
must be provided, and descriptor.depthStencil
.format
must have a depth aspect.
-
-
-
validating GPUPrimitiveState(descriptor.
primitive
, device) succeeds. -
If descriptor.
depthStencil
is provided:-
validating GPUDepthStencilState(descriptor.
depthStencil
) succeeds.
-
-
validating GPUMultisampleState(descriptor.
multisample
) succeeds. -
If descriptor.
multisample
.alphaToCoverageEnabled
is true: -
There must exist at least one attachment, either:
-
A non-
null
value in descriptor.fragment
.targets
, or -
A descriptor.
depthStencil
.
-
-
validating inter-stage interfaces(device, descriptor) returns
true
.
validating inter-stage interfaces(device, descriptor)
Arguments:
-
GPUDevice
device -
GPURenderPipelineDescriptor
descriptor
Returns: boolean
-
Let maxVertexShaderOutputComponents be device.limits.
maxInterStageShaderComponents
.-
If descriptor.
primitive
.topology
is"point-list"
:-
Decrement maxVertexShaderOutputComponents by 1.
-
-
-
Return
false
if any of the following requirements are unmet:-
There must be no more than maxVertexShaderOutputComponents scalar components across all user-defined outputs for descriptor.
vertex
. Each user-defined output of descriptor.vertex
consumes 4 scalar components. -
The location of each user-defined output of descriptor.
vertex
must be < device.limits.maxInterStageShaderVariables
.
-
-
If descriptor.
fragment
is provided:-
Let maxFragmentShaderInputComponents be device.limits.
maxInterStageShaderComponents
.-
If the
front_facing
builtin is an input of descriptor.fragment
:-
Decrement maxFragmentShaderInputComponents by 1.
-
-
If the
sample_index
builtin is an input of descriptor.fragment
:-
Decrement maxFragmentShaderInputComponents by 1.
-
-
If the
sample_mask
builtin is an input of descriptor.fragment
:-
Decrement maxFragmentShaderInputComponents by 1.
-
-
-
Return
false
if any of the following requirements are unmet:-
There must be no more than maxFragmentShaderInputComponents scalarcomponents across all user-defined inputs for descriptor.
fragment
.Each user-defined input of descriptor.fragment
consumes 4 scalar components. -
For each user-defined input of descriptor.
fragment
theremust be a user-defined output of descriptor.vertex
that location, type, and interpolation of the input.Note: Vertex-only pipelines can have user-defined outputs in the vertex stage;their values will be discarded.
-
-
Assert that the location of each user-defined input of descriptor.
fragment
is lessthan device.limits.maxInterStageShaderVariables
(resulting from the above rules).
-
-
Return
true
.
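The component-budget part of the algorithm above can be sketched as a plain function. This is an illustrative helper, not part of the WebGPU API; the limits object stands in for device.limits.

```javascript
// Sketch of the vertex-output component budget from
// "validating inter-stage interfaces". Each user-defined output
// consumes 4 scalar components; "point-list" reserves one component.
function vertexOutputBudgetOk(limits, topology, userDefinedOutputCount) {
  let max = limits.maxInterStageShaderComponents;
  if (topology === 'point-list') {
    max -= 1;
  }
  return userDefinedOutputCount * 4 <= max;
}
```

With a 60-component limit, 15 vector outputs fit exactly for "triangle-list" but exceed the budget for "point-list".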
Creating a simple GPURenderPipeline:

const renderPipeline = gpuDevice.createRenderPipeline({
    layout: pipelineLayout,
    vertex: {
        module: shaderModule,
        entryPoint: 'vertexMain'
    },
    fragment: {
        module: shaderModule,
        entryPoint: 'fragmentMain',
        targets: [{
            format: 'bgra8unorm',
        }],
    }
});
10.3.2. Primitive State

dictionary GPUPrimitiveState {
    GPUPrimitiveTopology topology = "triangle-list";
    GPUIndexFormat stripIndexFormat;
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    // Requires "depth-clip-control" feature.
    boolean unclippedDepth = false;
};

GPUPrimitiveState has the following members, which describe how a GPURenderPipeline constructs and rasterizes primitives from its vertex inputs:

topology, of type GPUPrimitiveTopology, defaulting to "triangle-list"
    The type of primitive to be constructed from the vertex inputs.

stripIndexFormat, of type GPUIndexFormat
    For pipelines with strip topologies ("line-strip" or "triangle-strip"), this determines the index buffer format and primitive restart value ("uint16"/0xFFFF or "uint32"/0xFFFFFFFF). It is not allowed on pipelines with non-strip topologies.

    Note: Some implementations require knowledge of the primitive restart value to compile pipeline state objects.

    To use a strip-topology pipeline with an indexed draw call (drawIndexed() or drawIndexedIndirect()), this must be set, and it must match the index buffer format used with the draw call (set in setIndexBuffer()). See § 23.3.3 Primitive Assembly for additional details.

frontFace, of type GPUFrontFace, defaulting to "ccw"
    Defines which polygons are considered front-facing.

cullMode, of type GPUCullMode, defaulting to "none"
    Defines which polygon orientation will be culled, if any.

unclippedDepth, of type boolean, defaulting to false
    If true, indicates that depth clipping is disabled. Requires the "depth-clip-control" feature to be enabled.
validating GPUPrimitiveState(descriptor, device)

Arguments:
- GPUPrimitiveState descriptor
- GPUDevice device

Return true if all of the following conditions are satisfied:

- If descriptor.topology is not "line-strip" or "triangle-strip":
    - descriptor.stripIndexFormat must not be provided.
- If descriptor.unclippedDepth is true:
    - "depth-clip-control" must be enabled for device.
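The two conditions above can be sketched as a small check. This is a non-normative illustration; enabledFeatures is an assumption standing in for the device's feature set.

```javascript
// Sketch of "validating GPUPrimitiveState" (illustrative, not the API).
function validatePrimitiveState(descriptor, enabledFeatures) {
  const strip = descriptor.topology === 'line-strip' ||
                descriptor.topology === 'triangle-strip';
  // stripIndexFormat is only allowed on strip topologies.
  if (!strip && descriptor.stripIndexFormat !== undefined) return false;
  // unclippedDepth requires the "depth-clip-control" feature.
  if (descriptor.unclippedDepth === true &&
      !enabledFeatures.has('depth-clip-control')) return false;
  return true;
}
```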
enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip",
};

GPUPrimitiveTopology defines the primitive type draw calls made with a GPURenderPipeline will use. See § 23.3.5 Rasterization for additional details:

"point-list"
    Each vertex defines a point primitive.

"line-list"
    Each consecutive pair of vertices defines a line primitive.

"line-strip"
    Each vertex after the first defines a line primitive between it and the previous vertex.

"triangle-list"
    Each consecutive triplet of vertices defines a triangle primitive.

"triangle-strip"
    Each vertex after the first two defines a triangle primitive between it and the previous two vertices.
enum GPUFrontFace {
    "ccw",
    "cw",
};

GPUFrontFace defines which polygons are considered front-facing by a GPURenderPipeline. See § 23.3.5.4 Polygon Rasterization for additional details:

"ccw"
    Polygons with vertices whose framebuffer coordinates are given in counter-clockwise order are considered front-facing.

"cw"
    Polygons with vertices whose framebuffer coordinates are given in clockwise order are considered front-facing.
enum GPUCullMode {
    "none",
    "front",
    "back",
};

GPUCullMode defines which polygons will be culled by draw calls made with a GPURenderPipeline. See § 23.3.5.4 Polygon Rasterization for additional details:

"none"
    No polygons are discarded.

"front"
    Front-facing polygons are discarded.

"back"
    Back-facing polygons are discarded.

Note: GPUFrontFace and GPUCullMode have no effect on "point-list", "line-list", or "line-strip" topologies.
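The per-topology definitions above imply how many primitives a given vertex count produces. The following is an illustrative helper (not part of the API) that restates those rules:

```javascript
// How many primitives a draw of vertexCount vertices produces for
// each GPUPrimitiveTopology, per the definitions above.
function primitiveCount(topology, vertexCount) {
  switch (topology) {
    case 'point-list':     return vertexCount;                  // one per vertex
    case 'line-list':      return Math.floor(vertexCount / 2);  // one per pair
    case 'line-strip':     return Math.max(vertexCount - 1, 0); // one per vertex after the first
    case 'triangle-list':  return Math.floor(vertexCount / 3);  // one per triplet
    case 'triangle-strip': return Math.max(vertexCount - 2, 0); // one per vertex after the first two
  }
}
```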
10.3.3. Multisample State

dictionary GPUMultisampleState {
    GPUSize32 count = 1;
    GPUSampleMask mask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};

GPUMultisampleState has the following members, which describe how a GPURenderPipeline interacts with a render pass’s multisampled attachments.

count, of type GPUSize32, defaulting to 1
    Number of samples per pixel. This GPURenderPipeline will be compatible only with attachment textures (colorAttachments and depthStencilAttachment) with matching sampleCounts.

mask, of type GPUSampleMask, defaulting to 0xFFFFFFFF
    Mask determining which samples are written to.

alphaToCoverageEnabled, of type boolean, defaulting to false
    When true, indicates that a fragment’s alpha channel should be used to generate a sample coverage mask.
validating GPUMultisampleState(descriptor)

Arguments:
- GPUMultisampleState descriptor

Return true if all of the following conditions are satisfied:

- If descriptor.alphaToCoverageEnabled is true:
    - descriptor.count > 1.
10.3.4. Fragment State

dictionary GPUFragmentState : GPUProgrammableStage {
    required sequence<GPUColorTargetState?> targets;
};

targets, of type sequence<GPUColorTargetState?>
    A list of GPUColorTargetState defining the formats and behaviors of the color targets this pipeline writes to.
validating GPUFragmentState(device, descriptor, layout)

Arguments:
- GPUDevice device
- GPUFragmentState descriptor
- GPUPipelineLayout layout

Return true if all of the following requirements are met:

- validating GPUProgrammableStage(FRAGMENT, descriptor, layout) succeeds.
- descriptor.targets.length must be ≤ device.[[limits]].maxColorAttachments.
- For each index of the indices of descriptor.targets containing a non-null value colorState:
    - colorState.format must be listed in § 26.1.1 Plain color formats with RENDER_ATTACHMENT capability.
    - If colorState.blend is provided:
        - colorState.format must be blendable.
        - colorState.blend.color must be a valid GPUBlendComponent.
        - colorState.blend.alpha must be a valid GPUBlendComponent.
    - colorState.writeMask must be < 16.
    - If get the entry point(FRAGMENT, descriptor) has a shader stage output value output with location attribute equal to index:
        - For each component in colorState.format, there must be a corresponding component in output. (That is, RGBA requires vec4; RGB requires vec3 or vec4; RG requires vec2, vec3, or vec4.)
        - If the GPUTextureSampleTypes for colorState.format (defined in § 26.1 Texture Format Capabilities) are:
            "float" and/or "unfilterable-float"
                output must have a floating-point scalar type.
            "sint"
                output must have a signed integer scalar type.
            "uint"
                output must have an unsigned integer scalar type.
        - If colorState.blend is provided and colorState.blend.color.srcFactor or .dstFactor uses the source alpha (is any of "src-alpha", "one-minus-src-alpha", or "src-alpha-saturated"), then:
            - output must have an alpha channel (that is, it must be a vec4).
    - Otherwise, since there is no shader output for the attachment:
        - colorState.writeMask must be 0.
- Validating GPUFragmentState’s color attachment bytes per sample(device, descriptor.targets) succeeds.

Validating GPUFragmentState’s color attachment bytes per sample(GPUDevice device, sequence<GPUColorTargetState?> targets)

Note: The fragment shader may output more values than what the pipeline uses. If that is the case the values are ignored.
component is a valid GPUBlendComponent if it meets the following requirements:

- If component.operation is "min" or "max":
    - component.srcFactor and component.dstFactor must both be "one".
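The rule above can be restated as a small predicate. A non-normative sketch; it applies the dictionary defaults ("add", "one", "zero") from GPUBlendComponent:

```javascript
// Sketch of "component is a valid GPUBlendComponent": the "min" and
// "max" operations ignore blend factors, so both must be "one".
// Defaults mirror GPUBlendComponent: operation "add", srcFactor "one",
// dstFactor "zero".
function isValidBlendComponent(component) {
  const op = component.operation ?? 'add';
  if (op === 'min' || op === 'max') {
    return (component.srcFactor ?? 'one') === 'one' &&
           (component.dstFactor ?? 'zero') === 'one';
  }
  return true;
}
```

Note that `{ operation: 'min' }` alone is invalid, because dstFactor defaults to "zero".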
10.3.5. Color Target State

dictionary GPUColorTargetState {
    required GPUTextureFormat format;

    GPUBlendState blend;
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

format, of type GPUTextureFormat
    The GPUTextureFormat of this color target. The pipeline will only be compatible with GPURenderPassEncoders which use a GPUTextureView of this format in the corresponding color attachment.

blend, of type GPUBlendState
    The blending behavior for this color target. If left undefined, disables blending for this color target.

writeMask, of type GPUColorWriteFlags, defaulting to 0xF
    Bitmask controlling which channels are written to when drawing to this color target.
dictionary GPUBlendState {
    required GPUBlendComponent color;
    required GPUBlendComponent alpha;
};

color, of type GPUBlendComponent
    Defines the blending behavior of the corresponding render target for color channels.

alpha, of type GPUBlendComponent
    Defines the blending behavior of the corresponding render target for the alpha channel.
typedef [EnforceRange] unsigned long GPUColorWriteFlags;

[Exposed=(Window, Worker), SecureContext]
namespace GPUColorWrite {
    const GPUFlagsConstant RED   = 0x1;
    const GPUFlagsConstant GREEN = 0x2;
    const GPUFlagsConstant BLUE  = 0x4;
    const GPUFlagsConstant ALPHA = 0x8;
    const GPUFlagsConstant ALL   = 0xF;
};
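A writeMask is a plain bitwise OR of these flag values. The following illustrative helper (not part of the API; the constant table restates the namespace values above) shows how a mask decomposes into channels:

```javascript
// The GPUColorWrite bit values, restated for illustration.
const GPUColorWriteBits = { RED: 0x1, GREEN: 0x2, BLUE: 0x4, ALPHA: 0x8, ALL: 0xF };

// Which channels a writeMask enables.
function enabledChannels(writeMask) {
  return ['RED', 'GREEN', 'BLUE', 'ALPHA']
    .filter(name => (writeMask & GPUColorWriteBits[name]) !== 0);
}
```

For example, a mask of `RED | BLUE` (0x5) writes only the red and blue channels.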
10.3.5.1. Blend State

dictionary GPUBlendComponent {
    GPUBlendOperation operation = "add";
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
};

GPUBlendComponent has the following members, which describe how the color or alpha components of a fragment are blended:

operation, of type GPUBlendOperation, defaulting to "add"
    Defines the GPUBlendOperation used to calculate the values written to the target attachment components.

srcFactor, of type GPUBlendFactor, defaulting to "one"
    Defines the GPUBlendFactor operation to be performed on values from the fragment shader.

dstFactor, of type GPUBlendFactor, defaulting to "zero"
    Defines the GPUBlendFactor operation to be performed on values from the target attachment.
The following tables use this notation to describe color components for a given fragment location:

Notation | Meaning |
---|---|
RGBAsrc | Color output by the fragment shader for the color attachment. If the shader doesn’t return an alpha channel, src-alpha blend factors cannot be used. |
RGBAdst | Color currently in the color attachment. Missing green/blue/alpha channels default to 0, 0, 1, respectively. |
RGBAconst | The current [[blendConstant]]. |
RGBAsrcFactor | The source blend factor components, as defined by srcFactor. |
RGBAdstFactor | The destination blend factor components, as defined by dstFactor. |
enum GPUBlendFactor {
    "zero",
    "one",
    "src",
    "one-minus-src",
    "src-alpha",
    "one-minus-src-alpha",
    "dst",
    "one-minus-dst",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "constant",
    "one-minus-constant",
};

GPUBlendFactor defines how either a source or destination blend factor is calculated:
GPUBlendFactor | Blend factor RGBA components |
---|---|
"zero" | (0, 0, 0, 0) |
"one" | (1, 1, 1, 1) |
"src" | (Rsrc, Gsrc, Bsrc, Asrc) |
"one-minus-src" | (1 - Rsrc, 1 - Gsrc, 1 - Bsrc, 1 - Asrc) |
"src-alpha" | (Asrc, Asrc, Asrc, Asrc) |
"one-minus-src-alpha" | (1 - Asrc, 1 - Asrc, 1 - Asrc, 1 - Asrc) |
"dst" | (Rdst, Gdst, Bdst, Adst) |
"one-minus-dst" | (1 - Rdst, 1 - Gdst, 1 - Bdst, 1 - Adst) |
"dst-alpha" | (Adst, Adst, Adst, Adst) |
"one-minus-dst-alpha" | (1 - Adst, 1 - Adst, 1 - Adst, 1 - Adst) |
"src-alpha-saturated" | (min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), min(Asrc, 1 - Adst), 1) |
"constant" | (Rconst, Gconst, Bconst, Aconst) |
"one-minus-constant" | (1 - Rconst, 1 - Gconst, 1 - Bconst, 1 - Aconst) |
enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max",
};

GPUBlendOperation defines the algorithm used to combine source and destination blend factors:
GPUBlendOperation | RGBA Components |
---|---|
"add" | RGBAsrc × RGBAsrcFactor + RGBAdst × RGBAdstFactor |
"subtract" | RGBAsrc × RGBAsrcFactor - RGBAdst × RGBAdstFactor |
"reverse-subtract" | RGBAdst × RGBAdstFactor - RGBAsrc × RGBAsrcFactor |
"min" | min(RGBAsrc, RGBAdst) |
"max" | max(RGBAsrc, RGBAdst) |
10.3.6. Depth/Stencil State

dictionary GPUDepthStencilState {
    required GPUTextureFormat format;

    boolean depthWriteEnabled;
    GPUCompareFunction depthCompare;

    GPUStencilFaceState stencilFront = {};
    GPUStencilFaceState stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

GPUDepthStencilState has the following members, which describe how a GPURenderPipeline will affect a render pass’s depthStencilAttachment:
format, of type GPUTextureFormat
    The format of depthStencilAttachment this GPURenderPipeline will be compatible with.

depthWriteEnabled, of type boolean
    Indicates if this GPURenderPipeline can modify depthStencilAttachment depth values.

depthCompare, of type GPUCompareFunction
    The comparison operation used to test fragment depths against depthStencilAttachment depth values.

stencilFront, of type GPUStencilFaceState, defaulting to {}
    Defines how stencil comparisons and operations are performed for front-facing primitives.

stencilBack, of type GPUStencilFaceState, defaulting to {}
    Defines how stencil comparisons and operations are performed for back-facing primitives.

stencilReadMask, of type GPUStencilValue, defaulting to 0xFFFFFFFF
    Bitmask controlling which depthStencilAttachment stencil value bits are read when performing stencil comparison tests.

stencilWriteMask, of type GPUStencilValue, defaulting to 0xFFFFFFFF
    Bitmask controlling which depthStencilAttachment stencil value bits are written to when performing stencil operations.

depthBias, of type GPUDepthBias, defaulting to 0
    Constant depth bias added to each fragment. See biased fragment depth for details.

depthBiasSlopeScale, of type float, defaulting to 0
    Depth bias that scales with the fragment’s slope. See biased fragment depth for details.

depthBiasClamp, of type float, defaulting to 0
    The maximum depth bias of a fragment. See biased fragment depth for details.
The biased fragment depth for a fragment being written to depthStencilAttachment attachment when drawing using GPUDepthStencilState state is calculated by running the following steps:

1. Let format be attachment.view.format.
2. Let r be the minimum positive representable value > 0 in the format converted to a 32-bit float.
3. Let maxDepthSlope be the maximum of the horizontal and vertical slopes of the fragment’s depth value.
4. If format is a unorm format:
    - Let bias be (float)state.depthBias * r + state.depthBiasSlopeScale * maxDepthSlope.
5. Otherwise, if format is a float format:
    - Let bias be (float)state.depthBias * 2^(exp(max depth in primitive) - r) + state.depthBiasSlopeScale * maxDepthSlope.
6. If state.depthBiasClamp > 0:
    - Set bias to min(state.depthBiasClamp, bias).
7. Otherwise, if state.depthBiasClamp < 0:
    - Set bias to max(state.depthBiasClamp, bias).
8. If state.depthBias ≠ 0 or state.depthBiasSlopeScale ≠ 0:
    - Set the fragment depth value to fragment depth value + bias.
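For a unorm depth format, the steps above reduce to a short computation. A non-normative sketch; r is passed in as a parameter (e.g. 1/65535 for "depth16unorm"):

```javascript
// Sketch of the biased fragment depth calculation for a unorm
// depth format, following the steps above.
function biasedDepth(fragDepth, state, r, maxDepthSlope) {
  let bias = state.depthBias * r +
             state.depthBiasSlopeScale * maxDepthSlope;
  if (state.depthBiasClamp > 0) bias = Math.min(state.depthBiasClamp, bias);
  else if (state.depthBiasClamp < 0) bias = Math.max(state.depthBiasClamp, bias);
  // The bias is only applied when either bias term is nonzero.
  if (state.depthBias !== 0 || state.depthBiasSlopeScale !== 0) {
    return fragDepth + bias;
  }
  return fragDepth;
}
```

With slopeScale 1, a slope of 2, and a clamp of 0.5, the bias of 2 is clamped down to 0.5 before being added.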
validating GPUDepthStencilState(descriptor)

Arguments:
- GPUDepthStencilState descriptor

Return true if, and only if, all of the following conditions are satisfied:

- descriptor.format is a depth-or-stencil format.
- If descriptor.depthWriteEnabled is true, or descriptor.depthCompare is provided and not "always":
    - descriptor.format must have a depth component.
- If descriptor.stencilFront or descriptor.stencilBack are not the default values:
    - descriptor.format must have a stencil component.
- If descriptor.format has a depth component:
dictionary GPUStencilFaceState {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};

GPUStencilFaceState has the following members, which describe how stencil comparisons and operations are performed:

compare, of type GPUCompareFunction, defaulting to "always"
    The GPUCompareFunction used when testing fragments against depthStencilAttachment stencil values.

failOp, of type GPUStencilOperation, defaulting to "keep"
    The GPUStencilOperation performed if the fragment stencil comparison test described by compare fails.

depthFailOp, of type GPUStencilOperation, defaulting to "keep"
    The GPUStencilOperation performed if the fragment depth comparison described by depthCompare fails.

passOp, of type GPUStencilOperation, defaulting to "keep"
    The GPUStencilOperation performed if the fragment stencil comparison test described by compare passes.
enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap",
};

GPUStencilOperation defines the following operations:

"keep"
    Keep the current stencil value.

"zero"
    Set the stencil value to 0.

"replace"
    Set the stencil value to [[stencilReference]].

"invert"
    Bitwise-invert the current stencil value.

"increment-clamp"
    Increment the current stencil value, clamping to the maximum representable value of the depthStencilAttachment’s stencil aspect.

"decrement-clamp"
    Decrement the current stencil value, clamping to 0.

"increment-wrap"
    Increment the current stencil value, wrapping to zero if the value exceeds the maximum representable value of the depthStencilAttachment’s stencil aspect.

"decrement-wrap"
    Decrement the current stencil value, wrapping to the maximum representable value of the depthStencilAttachment’s stencil aspect if the value goes below 0.
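The operations above can be sketched for the common case of an 8-bit stencil aspect (maximum representable value 255). An illustrative helper, not part of the API:

```javascript
// Applying a GPUStencilOperation to an 8-bit stencil value.
// reference stands in for [[stencilReference]].
function applyStencilOp(op, value, reference) {
  switch (op) {
    case 'keep':            return value;
    case 'zero':            return 0;
    case 'replace':         return reference;
    case 'invert':          return (~value) & 0xFF;
    case 'increment-clamp': return Math.min(value + 1, 255);
    case 'decrement-clamp': return Math.max(value - 1, 0);
    case 'increment-wrap':  return (value + 1) & 0xFF;
    case 'decrement-wrap':  return (value - 1) & 0xFF;
  }
}
```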
10.3.7. Vertex State

enum GPUIndexFormat {
    "uint16",
    "uint32",
};

The index format determines both the data type of index values in a buffer and, when used with strip primitive topologies ("line-strip" or "triangle-strip"), the primitive restart value. The primitive restart value is the index value indicating that a new primitive should be started rather than continuing to construct the strip with the prior indexed vertices.

GPUPrimitiveStates that specify a strip primitive topology must specify a stripIndexFormat if they are used for indexed draws so that the primitive restart value that will be used is known at pipeline creation time. GPUPrimitiveStates that specify a list primitive topology will use the index format passed to setIndexBuffer() when doing indexed rendering.
Index format | Byte size | Primitive restart value |
---|---|---|
"uint16" | 2 | 0xFFFF |
"uint32" | 4 | 0xFFFFFFFF |
10.3.7.1. Vertex Formats

The GPUVertexFormat of a vertex attribute indicates how data from a vertex buffer will be interpreted and exposed to the shader. The name of the format specifies the order of components, bits per component, and vertex data type for the component.

Each vertex data type can map to any WGSL scalar type of the same base type, regardless of the bits per component:

Vertex format prefix | Vertex data type | Compatible WGSL types |
---|---|---|
uint | unsigned int | u32 |
sint | signed int | i32 |
unorm | unsigned normalized | f16, f32 |
snorm | signed normalized | f16, f32 |
float | floating point | f16, f32 |

The multi-component formats specify the number of components after "x". Mismatches in the number of components between the vertex format and shader type are allowed, with components being either dropped or filled with default values to compensate.
A vertex attribute with a format of "unorm8x2" and byte values [0x7F, 0xFF] can be accessed in the shader with the following types:

Shader type | Shader value |
---|---|
f16 | 0.5h |
f32 | 0.5f |
vec2<f16> | vec2(0.5h, 1.0h) |
vec2<f32> | vec2(0.5f, 1.0f) |
vec3<f16> | vec3(0.5h, 1.0h, 0.0h) |
vec3<f32> | vec3(0.5f, 1.0f, 0.0f) |
vec4<f16> | vec4(0.5h, 1.0h, 0.0h, 1.0h) |
vec4<f32> | vec4(0.5f, 1.0f, 0.0f, 1.0f) |
See § 23.3.2 Vertex Processing for additional information about how vertex formats are exposed in theshader.
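The unorm mapping in the example above can be computed by hand: each byte maps linearly onto [0, 1] as byte / 255 (so 0x7F is approximately 0.5, and 0xFF is exactly 1.0). An illustrative helper:

```javascript
// Unpacking an "unorm8x2" attribute by hand: byte / 255 per component.
function unpackUnorm8x2(bytes) {
  return [bytes[0] / 255, bytes[1] / 255];
}
```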
enum GPUVertexFormat {
    "uint8x2", "uint8x4",
    "sint8x2", "sint8x4",
    "unorm8x2", "unorm8x4",
    "snorm8x2", "snorm8x4",
    "uint16x2", "uint16x4",
    "sint16x2", "sint16x4",
    "unorm16x2", "unorm16x4",
    "snorm16x2", "snorm16x4",
    "float16x2", "float16x4",
    "float32", "float32x2", "float32x3", "float32x4",
    "uint32", "uint32x2", "uint32x3", "uint32x4",
    "sint32", "sint32x2", "sint32x3", "sint32x4",
    "unorm10-10-10-2",
};
Vertex format | Data type | Components | Byte size | Example WGSL type |
---|---|---|---|---|
"uint8x2" | unsigned int | 2 | 2 | vec2<u32> |
"uint8x4" | unsigned int | 4 | 4 | vec4<u32> |
"sint8x2" | signed int | 2 | 2 | vec2<i32> |
"sint8x4" | signed int | 4 | 4 | vec4<i32> |
"unorm8x2" | unsigned normalized | 2 | 2 | vec2<f32> |
"unorm8x4" | unsigned normalized | 4 | 4 | vec4<f32> |
"snorm8x2" | signed normalized | 2 | 2 | vec2<f32> |
"snorm8x4" | signed normalized | 4 | 4 | vec4<f32> |
"uint16x2" | unsigned int | 2 | 4 | vec2<u32> |
"uint16x4" | unsigned int | 4 | 8 | vec4<u32> |
"sint16x2" | signed int | 2 | 4 | vec2<i32> |
"sint16x4" | signed int | 4 | 8 | vec4<i32> |
"unorm16x2" | unsigned normalized | 2 | 4 | vec2<f32> |
"unorm16x4" | unsigned normalized | 4 | 8 | vec4<f32> |
"snorm16x2" | signed normalized | 2 | 4 | vec2<f32> |
"snorm16x4" | signed normalized | 4 | 8 | vec4<f32> |
"float16x2" | float | 2 | 4 | vec2<f16> |
"float16x4" | float | 4 | 8 | vec4<f16> |
"float32" | float | 1 | 4 | f32 |
"float32x2" | float | 2 | 8 | vec2<f32> |
"float32x3" | float | 3 | 12 | vec3<f32> |
"float32x4" | float | 4 | 16 | vec4<f32> |
"uint32" | unsigned int | 1 | 4 | u32 |
"uint32x2" | unsigned int | 2 | 8 | vec2<u32> |
"uint32x3" | unsigned int | 3 | 12 | vec3<u32> |
"uint32x4" | unsigned int | 4 | 16 | vec4<u32> |
"sint32" | signed int | 1 | 4 | i32 |
"sint32x2" | signed int | 2 | 8 | vec2<i32> |
"sint32x3" | signed int | 3 | 12 | vec3<i32> |
"sint32x4" | signed int | 4 | 16 | vec4<i32> |
"unorm10-10-10-2" | unsigned normalized | 4 | 4 | vec4<f32> |
enum GPUVertexStepMode {
    "vertex",
    "instance",
};

The step mode configures how an address for vertex buffer data is computed, based on the current vertex or instance index:

"vertex"
    The address is advanced by arrayStride for each vertex, and reset between instances.

"instance"
    The address is advanced by arrayStride for each instance.
dictionary GPUVertexState : GPUProgrammableStage {
    sequence<GPUVertexBufferLayout?> buffers = [];
};

buffers, of type sequence<GPUVertexBufferLayout?>, defaulting to []
    A list of GPUVertexBufferLayouts, each defining the layout of vertex attribute data in a vertex buffer used by this pipeline.
A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes, which describe the members of the structure.

Each GPUVertexAttribute describes its format and its offset, in bytes, within the structure.

Each attribute appears as a separate input in a vertex shader, each bound by a numeric location, which is specified by shaderLocation. Every location must be unique within the GPUVertexState.
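The array-of-structures model above determines where an attribute's data lives in the buffer. A non-normative sketch: the element index is the vertex index for "vertex" step mode and the instance index for "instance" step mode, and the attribute's offset is added within the element:

```javascript
// Byte offset of an attribute's data within a vertex buffer,
// following the array-of-structures model (illustrative, not API).
function attributeByteOffset(layout, attrib, vertexIndex, instanceIndex) {
  const elementIndex =
    layout.stepMode === 'instance' ? instanceIndex : vertexIndex;
  return elementIndex * layout.arrayStride + attrib.offset;
}
```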
dictionary GPUVertexBufferLayout {
    required GPUSize64 arrayStride;
    GPUVertexStepMode stepMode = "vertex";
    required sequence<GPUVertexAttribute> attributes;
};

arrayStride, of type GPUSize64
    The stride, in bytes, between elements of this array.

stepMode, of type GPUVertexStepMode, defaulting to "vertex"
    Whether each element of this array represents per-vertex data or per-instance data.

attributes, of type sequence<GPUVertexAttribute>
    An array defining the layout of the vertex attributes within each element.
dictionary GPUVertexAttribute {
    required GPUVertexFormat format;
    required GPUSize64 offset;
    required GPUIndex32 shaderLocation;
};

format, of type GPUVertexFormat
    The GPUVertexFormat of the attribute.

offset, of type GPUSize64
    The offset, in bytes, from the beginning of the element to the data for the attribute.

shaderLocation, of type GPUIndex32
    The numeric location associated with this attribute, which will correspond with a "@location" attribute declared in the vertex.module.
validating GPUVertexBufferLayout(device, descriptor, vertexStage)

Arguments:
- GPUDevice device
- GPUVertexBufferLayout descriptor
- GPUProgrammableStage vertexStage

Return true, if and only if, all of the following conditions are satisfied:

- descriptor.arrayStride ≤ device.[[device]].[[limits]].maxVertexBufferArrayStride.
- descriptor.arrayStride is a multiple of 4.
- For each attribute attrib in the list descriptor.attributes:
    - If descriptor.arrayStride is zero:
        - attrib.offset + sizeof(attrib.format) ≤ device.[[device]].[[limits]].maxVertexBufferArrayStride.
      Otherwise:
        - attrib.offset + sizeof(attrib.format) ≤ descriptor.arrayStride.
    - attrib.offset is a multiple of the minimum of 4 and sizeof(attrib.format).
    - attrib.shaderLocation is < device.[[device]].[[limits]].maxVertexAttributes.
- Let entryPoint be get the entry point(VERTEX, vertexStage). Assert it is not null. For every vertex attribute var statically used by entryPoint, there is a corresponding attrib element of descriptor.attributes for which all of the following are true:
    - The type T of var is compatible with attrib.format’s vertex data type:
        "unorm", "snorm", or "float"
            T must be f32 or vecN<f32>.
        "uint"
            T must be u32 or vecN<u32>.
        "sint"
            T must be i32 or vecN<i32>.
    - The shader location is attrib.shaderLocation.
validating GPUVertexState(device, descriptor, layout)

Arguments:
- GPUDevice device
- GPUVertexState descriptor
- GPUPipelineLayout layout

Return true, if and only if, all of the following conditions are satisfied:

- validating GPUProgrammableStage(VERTEX, descriptor, layout) succeeds.
- descriptor.buffers.length is ≤ device.[[device]].[[limits]].maxVertexBuffers.
- Each vertexBuffer layout descriptor in the list descriptor.buffers passes validating GPUVertexBufferLayout(device, vertexBuffer, descriptor).
- The sum of vertexBuffer.attributes.length, over every vertexBuffer in descriptor.buffers, is ≤ device.[[device]].[[limits]].maxVertexAttributes.
- Each attrib in the union of all GPUVertexAttributes across descriptor.buffers has a distinct attrib.shaderLocation value.
11. Copies

11.1. Buffer Copies

Buffer copy operations operate on raw bytes.

WebGPU provides "buffered" GPUCommandEncoder commands:

- copyBufferToBuffer()
- clearBuffer()

and "immediate" GPUQueue operations:

- writeBuffer(), for ArrayBuffer-to-GPUBuffer writes
11.2. Image Copies

Image copy operations operate on texture/"image" data, rather than bytes.

WebGPU provides "buffered" GPUCommandEncoder commands:

- copyTextureToTexture()
- copyBufferToTexture()
- copyTextureToBuffer()

and "immediate" GPUQueue operations:

- writeTexture(), for ArrayBuffer-to-GPUTexture writes
- copyExternalImageToTexture(), for copies from Web Platform image sources to textures

Some formats have multiple possible representations of some texel values; e.g. in r8snorm, -1.0 can be represented as either -127 or -128. Copy commands are not guaranteed to preserve the source’s bit-representation.

The following definitions are used by these methods.
11.2.1. GPUImageDataLayout

dictionary GPUImageDataLayout {
    GPUSize64 offset = 0;
    GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage;
};

A GPUImageDataLayout is a layout of images within some linear memory. It’s used when copying data between a texture and a GPUBuffer, or when scheduling a write into a texture from the GPUQueue.
- For 2d textures, data is copied between one or multiple contiguous images and array layers.
- For 3d textures, data is copied between one or multiple contiguous images and depth slices.
Define images more precisely. In particular, define them as being comprised of texel blocks.
Operations that copy between byte arrays and textures always work with rows of texel blocks,which we’ll call block rows. It’s not possible to update only a part of a texel block.
Texel blocks are tightly packed within each block row in the linear memory layout of animage copy, with each subsequent texel block immediately following the previous texel block,with no padding.This includes copies to/from specific aspects of depth-or-stencil format textures:stencil values are tightly packed in an array of bytes;depth values are tightly packed in an array of the appropriate type ("depth16unorm" or "depth32float").
Define the exact copy semantics, by reference to common algorithms shared by the copy methods.
offset, of type GPUSize64, defaulting to 0
    The offset, in bytes, from the beginning of the image data source (such as a GPUImageCopyBuffer.buffer) to the start of the image data within that source.

bytesPerRow, of type GPUSize32
    The stride, in bytes, between the beginning of each block row and the subsequent block row.

    Required if there are multiple block rows (i.e. the copy height or depth is more than one block).

rowsPerImage, of type GPUSize32
    Number of block rows per single image of the texture. rowsPerImage × bytesPerRow is the stride, in bytes, between the beginning of each image of data and the subsequent image.

    Required if there are multiple images (i.e. the copy depth is more than one).
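The strides above determine where each block row starts in the linear data. A non-normative sketch of that addressing:

```javascript
// Byte offset of the block row at (imageIndex, rowIndex) within a
// linear GPUImageDataLayout, per the stride rules above.
function blockRowOffset(layout, imageIndex, rowIndex) {
  return (layout.offset ?? 0) +
         imageIndex * layout.rowsPerImage * layout.bytesPerRow +
         rowIndex * layout.bytesPerRow;
}
```

For example, with offset 64, bytesPerRow 256, and rowsPerImage 4, the third row of the second image starts at byte 64 + 1×4×256 + 2×256 = 1600.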
11.2.2. GPUImageCopyBuffer

In an image copy operation, GPUImageCopyBuffer defines a GPUBuffer and, together with the copySize, how image data is laid out in the buffer’s memory (see GPUImageDataLayout).

dictionary GPUImageCopyBuffer : GPUImageDataLayout {
    required GPUBuffer buffer;
};

buffer, of type GPUBuffer
    A buffer which either contains image data to be copied or will store the image data being copied, depending on the method it is being passed to.
validating GPUImageCopyBuffer

Arguments:
- GPUImageCopyBuffer imageCopyBuffer

Returns: boolean

Return true if and only if all of the following conditions are satisfied:

- imageCopyBuffer.buffer must be a valid GPUBuffer.
- imageCopyBuffer.bytesPerRow must be a multiple of 256.
11.2.3. GPUImageCopyTexture

In an image copy operation, a GPUImageCopyTexture defines a GPUTexture and, together with the copySize, the sub-region of the texture (spanning one or more contiguous texture subresources at the same mip-map level).

dictionary GPUImageCopyTexture {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
    GPUTextureAspect aspect = "all";
};

texture, of type GPUTexture
    Texture to copy to/from.

mipLevel, of type GPUIntegerCoordinate, defaulting to 0
    Mip-map level of the texture to copy to/from.

origin, of type GPUOrigin3D, defaulting to {}
    Defines the origin of the copy - the minimum corner of the texture sub-region to copy to/from. Together with copySize, defines the full copy sub-region.

aspect, of type GPUTextureAspect, defaulting to "all"
    Defines which aspects of the texture to copy to/from.
validating GPUImageCopyTexture

Arguments:
- GPUImageCopyTexture imageCopyTexture
- GPUExtent3D copySize

Returns: boolean

1. Let blockWidth be the texel block width of imageCopyTexture.texture.format.
2. Let blockHeight be the texel block height of imageCopyTexture.texture.format.
3. Return true if and only if all of the following conditions apply:
    - imageCopyTexture.texture must be a valid GPUTexture.
    - imageCopyTexture.mipLevel must be < imageCopyTexture.texture.mipLevelCount.
    - imageCopyTexture.origin.x must be a multiple of blockWidth.
    - imageCopyTexture.origin.y must be a multiple of blockHeight.
    - The imageCopyTexture physical subresource size of imageCopyTexture is equal to copySize if either of the following conditions is true:
        - imageCopyTexture.texture.format is a depth-stencil format.
        - imageCopyTexture.texture.sampleCount > 1.

Define the copies with 1d and 3d textures. [Issue #gpuweb/gpuweb#69]
11.2.4. GPUImageCopyTextureTagged
WebGPU textures hold raw numeric data, and are not tagged with semantic metadata describing colors.However, copyExternalImageToTexture()
copies from sources that describe colors.
A GPUImageCopyTextureTagged
is a GPUImageCopyTexture
which is additionally tagged with color space/encoding and alpha-premultiplication metadata, so that semantic color data may be preserved during copies. This metadata affects only the semantics of the copyExternalImageToTexture()
operation, not the semantics of the destination texture.
dictionary GPUImageCopyTextureTagged : GPUImageCopyTexture { PredefinedColorSpace colorSpace = "srgb"; boolean premultipliedAlpha = false; };
colorSpace
, of type PredefinedColorSpace, defaulting to "srgb"
-
Describes the color space and encoding used to encode data into the destination texture.
This may result in values outside of the range [0, 1] being written to the target texture, if its format can represent them. Otherwise, the results are clamped to the target texture format’s range.
Note: If
colorSpace
matches the source image, conversion may not be necessary. See § 3.10.2 Color Space Conversion Elision. premultipliedAlpha
, of type boolean, defaulting to false
-
Describes whether the data written into the texture should have its RGB channels premultiplied by the alpha channel, or not.
If this option is set to
true
and the source
is also premultiplied, the source RGB values must be preserved even if they exceed their corresponding alpha values. Note: If
premultipliedAlpha
matches the source image, conversion may not be necessary. See § 3.10.2 Color Space Conversion Elision.
11.2.5. GPUImageCopyExternalImage
typedef (ImageBitmap or ImageData or HTMLImageElement or HTMLVideoElement or VideoFrame or HTMLCanvasElement or OffscreenCanvas) GPUImageCopyExternalImageSource;
dictionary GPUImageCopyExternalImage { required GPUImageCopyExternalImageSource source; GPUOrigin2D origin = {}; boolean flipY = false; };
GPUImageCopyExternalImage
has the following members:
source
, of type GPUImageCopyExternalImageSource-
The source of the image copy. The copy source data is captured at the moment that
copyExternalImageToTexture()
is issued. Source size is defined by source type, given by this table: origin
, of type GPUOrigin2D, defaulting to {}
-
Defines the origin of the copy - the minimum (top-left) corner of the source sub-region to copy from. Together with
copySize
, defines the full copy sub-region. flipY
, of type boolean, defaulting to false
-
Describes whether the source image is vertically flipped, or not.
If this option is set to
true
, the copy is flipped vertically: the bottom row of the source region is copied into the first row of the destination region, and so on. The origin
option is still relative to the top-left corner of the source image, increasing downward.
11.2.6. Subroutines
imageCopyTexture physical subresource size
Arguments:
-
GPUImageCopyTexture
imageCopyTexture
Returns: GPUExtent3D
The imageCopyTexture physical subresource size of imageCopyTexture is calculated as follows:
Its width, height, and depthOrArrayLayers are the width, height, and depth, respectively, of the physical mip level-specific texture extent of imageCopyTexture.texture
subresource at mipmap level imageCopyTexture.mipLevel
.
validating linear texture data(layout, byteSize, format, copyExtent)
Arguments:
GPUImageDataLayout
layout-
Layout of the linear texture data.
GPUSize64
byteSize-
Total size of the linear data, in bytes.
GPUTextureFormat
format-
Format of the texture.
GPUExtent3D
copyExtent-
Extent of the texture to copy.
-
Let:
-
Fail if the following input validation requirements are not met:
-
If heightInBlocks > 1, layout.
bytesPerRow
must be specified. -
If copyExtent.depthOrArrayLayers > 1, layout.
bytesPerRow
and layout.rowsPerImage
must be specified. -
If specified, layout.
bytesPerRow
must be ≥ bytesInLastRow. -
If specified, layout.
rowsPerImage
must be ≥ heightInBlocks.
-
-
Let:
-
bytesPerRow be layout.
bytesPerRow
?? 0. -
rowsPerImage be layout.
rowsPerImage
?? 0.
Note: These default values have no effect, as they’re always multiplied by 0.
-
-
Let requiredBytesInCopy be 0.
-
If copyExtent.depthOrArrayLayers > 0:
-
Increment requiredBytesInCopy by bytesPerRow × rowsPerImage × (copyExtent.depthOrArrayLayers − 1).
-
If heightInBlocks > 0:
-
Increment requiredBytesInCopy by bytesPerRow × (heightInBlocks − 1) + bytesInLastRow.
-
-
-
Fail if the following condition is not satisfied:
-
The layout fits inside the linear data: layout.
offset
+ requiredBytesInCopy ≤ byteSize.
-
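The requiredBytesInCopy arithmetic above can be sketched as a pure function. This is a simplified sketch for uncompressed formats only, where heightInBlocks equals copyExtent.height and bytesInLastRow is the per-texel byte size times copyExtent.width; the `blockSize` parameter and plain-object inputs are assumptions of the sketch, and the up-front validation requirements are not repeated here.

```javascript
// Sketch of the requiredBytesInCopy computation from
// "validating linear texture data" (uncompressed formats).
function requiredBytesInCopy(layout, blockSize, copyExtent) {
  const heightInBlocks = copyExtent.height;
  const bytesInLastRow = blockSize * copyExtent.width;
  const bytesPerRow = layout.bytesPerRow ?? 0;
  const rowsPerImage = layout.rowsPerImage ?? 0;
  let required = 0;
  if (copyExtent.depthOrArrayLayers > 0) {
    // All layers except the last occupy bytesPerRow * rowsPerImage bytes.
    required += bytesPerRow * rowsPerImage * (copyExtent.depthOrArrayLayers - 1);
    if (heightInBlocks > 0) {
      // The last layer ends after its final (possibly short) row.
      required += bytesPerRow * (heightInBlocks - 1) + bytesInLastRow;
    }
  }
  return required;
}
```

For example, copying a 4x4x2 region of an rgba8unorm texture (4 bytes per texel) with bytesPerRow = 256 and rowsPerImage = 4 requires 256 × 4 × 1 + 256 × 3 + 16 = 1808 bytes.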
validating texture copy range
Arguments:
GPUImageCopyTexture
imageCopyTexture-
The texture subresource being copied into and copy origin.
GPUExtent3D
copySize-
The size of the texture region to copy.
-
Let blockWidth be the texel block width of imageCopyTexture.
texture
.format
. -
Let blockHeight be the texel block height of imageCopyTexture.
texture
.format
. -
Let subresourceSize be the imageCopyTexture physical subresource size of imageCopyTexture.
-
Return whether all the conditions below are satisfied:
-
(imageCopyTexture.
origin
.x + copySize.width) ≤ subresourceSize.width -
(imageCopyTexture.
origin
.y + copySize.height) ≤ subresourceSize.height -
(imageCopyTexture.
origin
.z + copySize.depthOrArrayLayers) ≤ subresourceSize.depthOrArrayLayers -
copySize.width must be a multiple of blockWidth.
-
copySize.height must be a multiple of blockHeight.
Note: The texture copy range is validated against the physical (rounded-up)size for compressed formats, allowing copies to access textureblocks which are not fully inside the texture.
-
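The range conditions above can be sketched as a pure function. This sketch assumes plain objects for the origin, copy size, and subresource size, and defaults to an uncompressed format (1x1 texel blocks); pass the real block dimensions for compressed formats.

```javascript
// Sketch of "validating texture copy range": the copy box must fit
// inside the (physical) subresource, and the copy size must be
// block-aligned.
function validateTextureCopyRange(origin, copySize, subresourceSize,
                                  blockWidth = 1, blockHeight = 1) {
  return (
    origin.x + copySize.width <= subresourceSize.width &&
    origin.y + copySize.height <= subresourceSize.height &&
    origin.z + copySize.depthOrArrayLayers <= subresourceSize.depthOrArrayLayers &&
    copySize.width % blockWidth === 0 &&
    copySize.height % blockHeight === 0
  );
}
```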
Two GPUTextureFormat
s format1 and format2 are copy-compatible if:
-
format1 equals format2, or
-
format1 and format2 differ only in whether they are
srgb
formats (have the-srgb
suffix).
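The copy-compatible relation reduces to comparing formats after stripping any "-srgb" suffix, as in this sketch:

```javascript
// Sketch of the copy-compatible relation: equal formats, or formats
// differing only by the "-srgb" suffix.
function copyCompatible(format1, format2) {
  const strip = (f) => f.endsWith("-srgb") ? f.slice(0, -"-srgb".length) : f;
  return strip(format1) === strip(format2);
}
```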
The set of subresources for texture copy(imageCopyTexture, copySize) is the subset of subresources of texture = imageCopyTexture.texture
for which each subresource s satisfies the following:
-
The mipmap level of s equals imageCopyTexture.
mipLevel
. -
The aspect of s is in the set of aspects of imageCopyTexture.
aspect
. -
If texture.
dimension
is "2d"
:-
The array layer of s is ≥ imageCopyTexture.
origin
.z and < imageCopyTexture.origin
.z + copySize.depthOrArrayLayers.
-
12. Command Buffers
Command buffers are pre-recorded lists of GPU commands that can be submitted to a GPUQueue
for execution. Each GPU command represents a task to be performed on the GPU, such as setting state, drawing, copying resources, etc.
A GPUCommandBuffer
can only be submitted once, at which point it becomes invalid. To reuse rendering commands across multiple submissions, use GPURenderBundle
.
12.1. GPUCommandBuffer
[Exposed=(Window, Worker), SecureContext] interface GPUCommandBuffer {}; GPUCommandBuffer includes GPUObjectBase;
GPUCommandBuffer
has the following internal slots:
[[command_list]]
, of type list<GPU command>-
A list of GPU commands to be executed on the Queue timeline when this command buffer is submitted.
[[renderState]]
, of type RenderState-
The current state used by any render pass commands being executed, initially
null
.
12.1.1. Command Buffer Creation
dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {};
13. Command Encoding
13.1. GPUCommandsMixin
GPUCommandsMixin
defines state common to all interfaces which encode commands. It has no methods.
interface mixin GPUCommandsMixin {};
GPUCommandsMixin
adds the following internal slots to interfaces which include it:
[[state]]
, of type encoder state, initially "open"-
The current state of the encoder.
[[commands]]
, of type list<GPU command>, initially []
-
A list of GPU commands to be executed on the Queue timeline when a
GPUCommandBuffer
containing these commands is submitted.
The encoder state may be one of the following:
- "open"
-
The encoder is available to encode new commands.
- "locked"
-
The encoder cannot be used, because it is locked by a child encoder: it is a
GPUCommandEncoder
, and a GPURenderPassEncoder
or GPUComputePassEncoder
is active. The encoder becomes "open" again when the pass is ended. Any command issued in this state makes the encoder invalid.
- "ended"
-
The encoder has been ended and new commands can no longer be encoded.
Any command issued in this state will generate a validation error.
To Validate the encoder state of GPUCommandsMixin
encoder:
If encoder.[[state]]
is:
- "open"
-
Return
true
. - "locked"
-
Make encoder invalid, and return
false
. - "ended"
-
Generate a validation error, and return
false
.
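The three-way state check above can be sketched with a plain object standing in for the encoder; the `valid` and `validationError` flags here are assumptions of the sketch, modeling "make the encoder invalid" and "generate a validation error" as side effects.

```javascript
// Sketch of "Validate the encoder state": "open" passes; "locked"
// invalidates the encoder; "ended" generates a validation error.
function validateEncoderState(encoder) {
  switch (encoder.state) {
    case "open":
      return true;
    case "locked":
      encoder.valid = false; // make the encoder invalid
      return false;
    case "ended":
      encoder.validationError = true; // generate a validation error
      return false;
  }
}
```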
To Enqueue a command on GPUCommandsMixin
encoder which issues the steps of a GPU Command command:
-
Append command to encoder.
[[commands]]
. -
When command is executed as part of a
GPUCommandBuffer
:-
Issue the steps of command.
-
13.2. GPUCommandEncoder
[Exposed=(Window, Worker), SecureContext]
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});
    undefined copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);
    undefined copyBufferToTexture(
        GPUImageCopyBuffer source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);
    undefined copyTextureToBuffer(
        GPUImageCopyTexture source,
        GPUImageCopyBuffer destination,
        GPUExtent3D copySize);
    undefined copyTextureToTexture(
        GPUImageCopyTexture source,
        GPUImageCopyTexture destination,
        GPUExtent3D copySize);
    undefined clearBuffer(
        GPUBuffer buffer,
        optional GPUSize64 offset = 0,
        optional GPUSize64 size);
    undefined resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);
    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder includes GPUCommandsMixin;
GPUCommandEncoder includes GPUDebugCommandsMixin;
13.2.1. Command Encoder Creation
dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {};
createCommandEncoder(descriptor)
-
Creates a
GPUCommandEncoder
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createCommandEncoder(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUCommandEncoderDescriptor
✘ ✔ Description of the GPUCommandEncoder
to create.Returns:
GPUCommandEncoder
Content timeline steps:
-
Let e be a new
GPUCommandEncoder
object. -
Issue the initialization steps on the Device timeline of this.
-
Return e.
Device timeline initialization steps:
-
If any of the following conditions are unsatisfied, generate a validation error, make e invalid, and stop.
-
this is valid.
-
Describe remaining
createCommandEncoder()
validation and algorithm steps. -
Creating a GPUCommandEncoder
, encoding a command to clear a buffer, finishing the encoder to get a GPUCommandBuffer
, then submitting it to the GPUQueue
.
const commandEncoder = gpuDevice.createCommandEncoder();
commandEncoder.clearBuffer(buffer);
const commandBuffer = commandEncoder.finish();
gpuDevice.queue.submit([commandBuffer]);
13.3. Pass Encoding
beginRenderPass(descriptor)
-
Begins encoding a render pass described by descriptor.
Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.beginRenderPass(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderPassDescriptor
✘ ✘ Description of the GPURenderPassEncoder
to create.Returns:
GPURenderPassEncoder
Content timeline steps:
-
For each non-
null
colorAttachment in descriptor.colorAttachments
:-
If colorAttachment.
clearValue
is notnull
.-
? validate GPUColor shape(colorAttachment.
clearValue
).
-
-
-
Let pass be a new
GPURenderPassEncoder
object. -
Issue the initialization steps on the Device timeline of this.
-
Return pass.
Device timeline initialization steps:
-
Validate the encoder state of this. If it returns false, make pass invalid and return.
-
Set this.
[[state]]
to "locked". -
If any of the following requirements are unmet, make pass invalid and return.
-
descriptor must meet the Valid Usage rulesgiven device this.
[[device]]
. -
The set of attachments in descriptor.
colorAttachments
must be pairwise disjoint. That is, no two attachments may refer to the same region, as defined by the view
's texture subresource range and (for "3d"
attachments) the attachment’s depthSlice
.
-
-
Consider each texture subresource viewed by a non-
null
element of descriptor.colorAttachments
to be used as an attachment for the duration of the render pass. If a subresource is seen more than once, consider it used only once. (Attachments are already checked for overlaps in the validation rules above.)
-
Let depthStencilAttachment be descriptor.
depthStencilAttachment
, or null
if not provided. -
If depthStencilAttachment is not
null
:-
Let depthStencilView be depthStencilAttachment.
view
. -
Consider the depth subresource of depthStencilView (if any) used for the duration of the render pass, as attachment-read if depthStencilAttachment.
depthReadOnly
is true,or as attachment otherwise. -
Consider the stencil subresource of depthStencilView (if any) used for the duration of the render pass, as attachment-read if depthStencilAttachment.
stencilReadOnly
is true,or as attachment otherwise. -
Set pass.
[[depthReadOnly]]
to depthStencilAttachment.depthReadOnly
. -
Set pass.
[[stencilReadOnly]]
to depthStencilAttachment.stencilReadOnly
.
-
-
Set pass.
[[layout]]
to derive render targets layout from pass(descriptor). -
If descriptor.
timestampWrites
is provided:-
Let timestampWrites be descriptor.
timestampWrites
. -
If timestampWrites.
beginningOfPassWriteIndex
is provided, append a GPU command to this.[[commands]]
with the following steps:-
Before the pass commands begin executing,write the current queue timestamp into index timestampWrites.
beginningOfPassWriteIndex
of timestampWrites.querySet
.
-
-
If timestampWrites.
endOfPassWriteIndex
is provided, set pass.[[endTimestampWrite]]
to a GPU command with the following steps:-
After the pass commands finish executing,write the current queue timestamp into index timestampWrites.
endOfPassWriteIndex
of timestampWrites.querySet
.
-
-
-
Set pass.
[[drawCount]]
to 0. -
Set pass.
[[maxDrawCount]]
to descriptor.maxDrawCount
. -
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
Queue timeline steps:
-
Let the
[[renderState]]
of the currently executingGPUCommandBuffer
be a new RenderState. -
Perform attachment loads/clears.
specify the behavior of read-only depth/stencil
-
beginComputePass(descriptor)
-
Begins encoding a compute pass described by descriptor.
Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.beginComputePass(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUComputePassDescriptor
✘ ✔ Returns:
GPUComputePassEncoder
Content timeline steps:
-
Let pass be a new
GPUComputePassEncoder
object. -
Issue the initialization steps on the Device timeline of this.
-
Return pass.
Device timeline initialization steps:
-
Validate the encoder state of this. If it returns false, make pass invalid and return.
-
Set this.
[[state]]
to "locked". -
If any of the following requirements are unmet, make pass invalid and return.
-
If descriptor.
timestampWrites
is provided:-
Validate timestampWrites(this.
[[device]]
, descriptor.timestampWrites
)must return true.
-
-
-
If descriptor.
timestampWrites
is provided:-
Let timestampWrites be descriptor.
timestampWrites
. -
If timestampWrites.
beginningOfPassWriteIndex
is provided, append a GPU command to this.[[commands]]
with the following steps:-
Before the pass commands begin executing,write the current queue timestamp into index timestampWrites.
beginningOfPassWriteIndex
of timestampWrites.querySet
.
-
-
If timestampWrites.
endOfPassWriteIndex
is provided, set pass.[[endTimestampWrite]]
to a GPU command with the following steps:-
After the pass commands finish executing,write the current queue timestamp into index timestampWrites.
endOfPassWriteIndex
of timestampWrites.querySet
.
-
-
-
13.4. Buffer Copy Commands
copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of aGPUBuffer
to a sub-region of anotherGPUBuffer
.Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size) method. Parameter Type Nullable Optional Description source
GPUBuffer
✘ ✘ The GPUBuffer
to copy from.sourceOffset
GPUSize64
✘ ✘ Offset in bytes into source to begin copying from. destination
GPUBuffer
✘ ✘ The GPUBuffer
to copy to.destinationOffset
GPUSize64
✘ ✘ Offset in bytes into destination to place the copied data. size
GPUSize64
✘ ✘ Bytes to copy. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied make this invalid and stop.
-
source is valid to use with this.
-
destination is valid to use with this.
-
source.
usage
containsCOPY_SRC
. -
destination.
usage
containsCOPY_DST
. -
size is a multiple of 4.
-
sourceOffset is a multiple of 4.
-
destinationOffset is a multiple of 4.
-
source.
size
≥ (sourceOffset + size). -
destination.
size
≥ (destinationOffset + size). -
source and destination are not the same
GPUBuffer
.
-
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
Queue timeline steps:
-
Copy size bytes of source, beginning at sourceOffset, into destination,beginning at destinationOffset.
-
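The validation conditions above can be sketched as a pure function over plain `{ size, usage }` objects standing in for GPUBuffers; the numeric values are the standard GPUBufferUsage.COPY_SRC and COPY_DST flags.

```javascript
// Sketch of the copyBufferToBuffer validation conditions.
const COPY_SRC = 0x0004, COPY_DST = 0x0008; // GPUBufferUsage flag values

function validateCopyBufferToBuffer(source, sourceOffset,
                                    destination, destinationOffset, size) {
  return (
    (source.usage & COPY_SRC) !== 0 &&
    (destination.usage & COPY_DST) !== 0 &&
    size % 4 === 0 &&
    sourceOffset % 4 === 0 &&
    destinationOffset % 4 === 0 &&
    source.size >= sourceOffset + size &&
    destination.size >= destinationOffset + size &&
    source !== destination // a buffer cannot be copied onto itself
  );
}
```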
clearBuffer(buffer, offset, size)
-
Encode a command into the
GPUCommandEncoder
that fills a sub-region of aGPUBuffer
with zeros.Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.clearBuffer(buffer, offset, size) method. Parameter Type Nullable Optional Description buffer
GPUBuffer
✘ ✘ The GPUBuffer
to clear.offset
GPUSize64
✘ ✔ Offset in bytes into buffer where the sub-region to clear begins. size
GPUSize64
✘ ✔ Size in bytes of the sub-region to clear. Defaults to the size of the buffer minus offset. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If size is missing, set size to
max(0, buffer.size - offset)
. -
If any of the following conditions are unsatisfied make this invalid and stop.
-
buffer is valid to use with this.
-
buffer.
usage
containsCOPY_DST
. -
size is a multiple of 4.
-
offset is a multiple of 4.
-
buffer.
size
≥ (offset + size).
-
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
Queue timeline steps:
-
Set size bytes of buffer to
0
starting at offset.
-
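The defaulting and validation steps above can be sketched together; `buffer` is a plain `{ size, usage }` object and the numeric constant is the standard GPUBufferUsage.COPY_DST flag.

```javascript
// Sketch of clearBuffer's size defaulting and validation.
const GPU_COPY_DST = 0x0008; // GPUBufferUsage.COPY_DST

function validateClearBuffer(buffer, offset = 0, size) {
  // If size is missing, default to the rest of the buffer.
  if (size === undefined) size = Math.max(0, buffer.size - offset);
  const ok =
    (buffer.usage & GPU_COPY_DST) !== 0 &&
    size % 4 === 0 &&
    offset % 4 === 0 &&
    buffer.size >= offset + size;
  return { ok, size };
}
```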
13.5. Image Copy Commands
copyBufferToTexture(source, destination, copySize)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of a GPUBuffer
to a sub-region of one or multiple contiguous texture subresources. Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyBufferToTexture(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyBuffer
✘ ✘ Combined with copySize, defines the region of the source buffer. destination
GPUImageCopyTexture
✘ ✘ Combined with copySize, defines the region of the destination texture subresource. copySize
GPUExtent3D
✘ ✘ Returns:
undefined
Content timeline steps:
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Let dstTexture be destination.
texture
. -
validating GPUImageCopyBuffer(source) returns
true
. -
source.
buffer
.usage
containsCOPY_SRC
. -
validating GPUImageCopyTexture(destination, copySize) returns
true
. -
dstTexture.
usage
containsCOPY_DST
. -
dstTexture.
sampleCount
is 1. -
Let aspectSpecificFormat = dstTexture.
format
. -
If dstTexture.
format
is a depth-or-stencil format:-
destination.
aspect
must refer to a single aspect of dstTexture.format
. -
That aspect must be a valid image copy destination according to § 26.1.2 Depth-stencil formats.
-
Set aspectSpecificFormat to the aspect-specific format according to § 26.1.2 Depth-stencil formats.
-
-
validating texture copy range(destination, copySize) returns
true
. -
If dstTexture.
format
is not a depth-or-stencil format:-
source.
offset
is a multiple of the texel block copy footprint of dstTexture.format
.
-
-
If dstTexture.
format
is a depth-or-stencil format:-
source.
offset
is a multiple of 4.
-
-
validating linear texture data(source, source.
buffer
.size
, aspectSpecificFormat, copySize) succeeds.
-
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
Queue timeline steps:
Define copy, including provision for snorm.
-
copyTextureToBuffer(source, destination, copySize)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of one or multiple contiguous texture subresources to a sub-region of a GPUBuffer
.Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyTextureToBuffer(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyTexture
✘ ✘ Combined with copySize, defines the region of the source texture subresources. destination
GPUImageCopyBuffer
✘ ✘ Combined with copySize, defines the region of the destination buffer. copySize
GPUExtent3D
✘ ✘ Returns:
undefined
Content timeline steps:
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Let srcTexture be source.
texture
. -
validating GPUImageCopyTexture(source, copySize) returns
true
. -
srcTexture.
usage
containsCOPY_SRC
. -
srcTexture.
sampleCount
is 1. -
Let aspectSpecificFormat = srcTexture.
format
. -
If srcTexture.
format
is a depth-or-stencil format:-
source.
aspect
must refer to a single aspect of srcTexture.format
. -
That aspect must be a valid image copy source according to § 26.1.2 Depth-stencil formats.
-
Set aspectSpecificFormat to the aspect-specific format according to § 26.1.2 Depth-stencil formats.
-
-
validating GPUImageCopyBuffer(destination) returns
true
. -
destination.
buffer
.usage
containsCOPY_DST
. -
validating texture copy range(source, copySize) returns
true
. -
If srcTexture.
format
is not a depth-or-stencil format:-
destination.
offset
is a multiple of the texel block copy footprint of srcTexture.format
.
-
-
If srcTexture.
format
is a depth-or-stencil format:-
destination.
offset
is a multiple of 4.
-
-
validating linear texture data(destination, destination.
buffer
.size
, aspectSpecificFormat, copySize) succeeds.
-
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
Queue timeline steps:
Define copy, including provision for snorm.
-
copyTextureToTexture(source, destination, copySize)
-
Encode a command into the
GPUCommandEncoder
that copies data from a sub-region of one or multiple contiguous texture subresources to another sub-region of one or multiple contiguous texture subresources. Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.copyTextureToTexture(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyTexture
✘ ✘ Combined with copySize, defines the region of the source texture subresources. destination
GPUImageCopyTexture
✘ ✘ Combined with copySize, defines the region of the destination texture subresources. copySize
GPUExtent3D
✘ ✘ Returns:
undefined
Content timeline steps:
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Let srcTexture be source.
texture
. -
Let dstTexture be destination.
texture
. -
validating GPUImageCopyTexture(source, copySize) returns
true
. -
srcTexture.
usage
containsCOPY_SRC
. -
validating GPUImageCopyTexture(destination, copySize) returns
true
. -
dstTexture.
usage
containsCOPY_DST
. -
srcTexture.
sampleCount
is equal to dstTexture.sampleCount
. -
srcTexture.
format
and dstTexture.format
must be copy-compatible. -
If srcTexture.
format
is a depth-stencil format:-
source.
aspect
and destination.aspect
must both refer to all aspects of srcTexture.format
and dstTexture.format
, respectively.
-
-
validating texture copy range(source, copySize) returns
true
. -
validating texture copy range(destination, copySize) returns
true
. -
The set of subresources for texture copy(source, copySize) andthe set of subresources for texture copy(destination, copySize) are disjoint.
-
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
Queue timeline steps:
Define copy, including provision for snorm.
-
13.6. Queries
resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset)
-
Resolves query results from a
GPUQuerySet
out into a range of aGPUBuffer
.Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.resolveQuerySet(querySet, firstQuery, queryCount, destination, destinationOffset) method. Parameter Type Nullable Optional Description querySet
GPUQuerySet
✘ ✘ firstQuery
GPUSize32
✘ ✘ queryCount
GPUSize32
✘ ✘ destination
GPUBuffer
✘ ✘ destinationOffset
GPUSize64
✘ ✘ Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
querySet is valid to use with this.
-
destination is valid to use with this.
-
destination.
usage
containsQUERY_RESOLVE
. -
firstQuery < the number of queries in querySet.
-
(firstQuery + queryCount) ≤ the number of queries in querySet.
-
destinationOffset is a multiple of 256.
-
destinationOffset + 8 × queryCount ≤ destination.
size
.
-
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline when executed.
Queue timeline steps:
-
Let queryIndex be firstQuery.
-
Let offset be destinationOffset.
-
While queryIndex < firstQuery + queryCount:
-
Set 8 bytes of destination, beginning at offset, to be the value of querySet at queryIndex.
-
Set queryIndex to be queryIndex + 1.
-
Set offset to be offset + 8.
-
-
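The queue-timeline loop above can be sketched directly: each query value occupies 8 bytes, written consecutively from destinationOffset. In this sketch, querySet is modeled as an array of BigInts and destination as an ArrayBuffer; little-endian byte order is an assumption of the sketch, not something the steps above specify.

```javascript
// Sketch of the resolveQuerySet queue-timeline steps: write each
// 64-bit query value into 8 bytes of the destination, back to back.
function resolveQuerySetValues(querySet, firstQuery, queryCount,
                               destination, destinationOffset) {
  const view = new DataView(destination);
  let offset = destinationOffset;
  for (let queryIndex = firstQuery; queryIndex < firstQuery + queryCount; queryIndex++) {
    view.setBigUint64(offset, querySet[queryIndex], true); // little-endian (assumed)
    offset += 8;
  }
}
```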
13.7. Finalization
A GPUCommandBuffer
containing the commands recorded by the GPUCommandEncoder
can be createdby calling finish()
. Once finish()
has been called, the command encoder can no longer be used.
finish(descriptor)
-
Completes recording of the commands sequence and returns a corresponding
GPUCommandBuffer
.Called on:
GPUCommandEncoder
this.Arguments:
Arguments for the GPUCommandEncoder.finish(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUCommandBufferDescriptor
✘ ✔ Returns:
GPUCommandBuffer
Content timeline steps:
-
Let commandBuffer be a new
GPUCommandBuffer
. -
Issue the finish steps on the Device timeline of this.
[[device]]
. -
Return commandBuffer.
Device timeline finish steps:
-
Let validationSucceeded be
true
if all of the following requirements are met, and false
otherwise.-
this must be valid.
-
this.
[[state]]
must be "open". -
this.
[[debug_group_stack]]
must be empty. -
Every usage scope contained in this must satisfy the usage scope validation.
-
-
Set this.
[[state]]
to "ended". -
If validationSucceeded is
false
, then:-
Generate a validation error.
-
Return a new invalid
GPUCommandBuffer
.
-
-
Set commandBuffer.
[[command_list]]
to this.[[commands]]
.
-
14. Programmable Passes
interface mixin GPUBindingCommandsMixin {
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);
    undefined setBindGroup(GPUIndex32 index, GPUBindGroup? bindGroup,
        Uint32Array dynamicOffsetsData, GPUSize64 dynamicOffsetsDataStart,
        GPUSize32 dynamicOffsetsDataLength);
};
GPUBindingCommandsMixin
assumes the presence of GPUObjectBase
and GPUCommandsMixin
members on the same object.It must only be included by interfaces which also include those mixins.
GPUBindingCommandsMixin
has the following internal slots:
[[bind_groups]]
, of type ordered map<GPUIndex32
,GPUBindGroup
>-
The current
GPUBindGroup
for each index, initially empty. [[dynamic_offsets]]
, of type ordered map<GPUIndex32
, list<GPUBufferDynamicOffset
> >-
The current dynamic offsets for each
[[bind_groups]]
entry, initially empty.
14.1. Bind Groups
setBindGroup() has two overloads:
setBindGroup(index, bindGroup, dynamicOffsets)
-
Sets the current
GPUBindGroup
for the given index.Called on:
GPUBindingCommandsMixin
this.Arguments:
index
, of typeGPUIndex32
, non-nullable, required-
The index to set the bind group at.
bindGroup
, of typeGPUBindGroup
, nullable, required-
Bind group to use for subsequent render or compute commands.
dynamicOffsets
, of type sequence<GPUBufferDynamicOffset
>, non-nullable, defaulting to []
-
Array containing buffer offsets in bytes for each entry in bindGroup marked as
buffer
.hasDynamicOffset
.
Returns:
undefined
Note: dynamicOffsets[i] is used for the i-th dynamic buffer binding in the bind group, when bindings are ordered by
GPUBindGroupLayoutEntry
.binding
. Said differently, dynamicOffsets are in the same order as the dynamic buffer bindings’ GPUBindGroupLayoutEntry
.binding
. Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
Let dynamicOffsetCount be 0 if
bindGroup
isnull
, or bindGroup.[[layout]]
.[[dynamicOffsetCount]]
if not. -
If any of the following requirements are unmet, make this invalid and stop.
-
index must be < this.
[[device]]
.[[limits]]
.maxBindGroups
. -
dynamicOffsets.length must equal dynamicOffsetCount.
-
-
If bindGroup is
null
:Otherwise:
-
If any of the following requirements are unmet, make this invalid and stop.
-
bindGroup must be valid to use with this.
-
For each dynamic binding (bufferBinding, bufferLayout, dynamicOffsetIndex) in bindGroup:
-
bufferBinding.
offset
+ dynamicOffsets[dynamicOffsetIndex] + bufferLayout.minBindingSize
must be ≤ bufferBinding.buffer
.size
. -
If bufferLayout.
type
is"uniform"
:-
dynamicOffsets[dynamicOffsetIndex] must be a multiple of
minUniformBufferOffsetAlignment
.
-
-
If bufferLayout.
type
is"storage"
or"read-only-storage"
:-
dynamicOffsets[dynamicOffsetIndex] must be a multiple of
minStorageBufferOffsetAlignment
.
-
-
-
-
Set this.
[[bind_groups]]
[index] to be bindGroup. -
Set this.
[[dynamic_offsets]]
[index] to be a copy of dynamicOffsets.
-
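The per-binding dynamic-offset checks above can be sketched for a single binding. The plain-object shapes and the alignment values are assumptions of the sketch: 256 is the default for both minUniformBufferOffsetAlignment and minStorageBufferOffsetAlignment, but a device may report smaller supported values.

```javascript
// Sketch of the dynamic-offset checks in setBindGroup for one binding.
function validateDynamicOffset(bufferLayout, bufferBinding, dynamicOffset) {
  const limits = {
    minUniformBufferOffsetAlignment: 256, // default limit values
    minStorageBufferOffsetAlignment: 256,
  };
  // The dynamically-offset binding range must fit inside the buffer.
  if (bufferBinding.offset + dynamicOffset + (bufferLayout.minBindingSize ?? 0) >
      bufferBinding.buffer.size) return false;
  // The offset must meet the alignment for the binding type.
  if (bufferLayout.type === "uniform" &&
      dynamicOffset % limits.minUniformBufferOffsetAlignment !== 0) return false;
  if ((bufferLayout.type === "storage" || bufferLayout.type === "read-only-storage") &&
      dynamicOffset % limits.minStorageBufferOffsetAlignment !== 0) return false;
  return true;
}
```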
setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength)
-
Sets the current
GPUBindGroup
for the given index, specifying dynamic offsets as a subsetof aUint32Array
.Called on:
GPUBindingCommandsMixin
this.Arguments:
Arguments for the GPUBindingCommandsMixin.setBindGroup(index, bindGroup, dynamicOffsetsData, dynamicOffsetsDataStart, dynamicOffsetsDataLength) method. Parameter Type Nullable Optional Description index
GPUIndex32
✘ ✘ The index to set the bind group at. bindGroup
GPUBindGroup?
✔ ✘ Bind group to use for subsequent render or compute commands. dynamicOffsetsData
Uint32Array
✘ ✘ Array containing buffer offsets in bytes for each entry in bindGroup marked as buffer
.hasDynamicOffset
.dynamicOffsetsDataStart
GPUSize64
✘ ✘ Offset in elements into dynamicOffsetsData where the buffer offset data begins. dynamicOffsetsDataLength
GPUSize32
✘ ✘ Number of buffer offsets to read from dynamicOffsetsData. Returns:
undefined
Content timeline steps:
-
If any of the following requirements are unmet, throw a
RangeError
and stop.-
dynamicOffsetsDataStart must be ≥ 0.
-
dynamicOffsetsDataStart + dynamicOffsetsDataLength must be ≤ dynamicOffsetsData.
length
.
-
-
Let dynamicOffsets be a list containing the range, starting at index dynamicOffsetsDataStart, of dynamicOffsetsDataLength elements of a copy of dynamicOffsetsData.
-
Call this.
setBindGroup
(index, bindGroup, dynamicOffsets).
-
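The content timeline steps above amount to a bounds check followed by extracting a copy of a subrange. A non-normative sketch of that selection logic:

```javascript
// Non-normative sketch of how the Uint32Array overload selects its
// dynamic offsets before delegating to the list-based setBindGroup().
function extractDynamicOffsets(dynamicOffsetsData, start, length) {
  if (start < 0 || start + length > dynamicOffsetsData.length) {
    throw new RangeError("dynamic offset range is out of bounds");
  }
  // A copy of the selected range, as an ordinary array of numbers.
  return Array.from(dynamicOffsetsData.subarray(start, start + length));
}
```

This overload exists so callers can keep all of a frame's dynamic offsets in one typed array and pass a window of it per call, avoiding per-call allocation.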
To Iterate over each dynamic binding offset in a given GPUBindGroup
bindGroup with a given list of steps to be executed for each dynamic offset:
-
Let dynamicOffsetIndex be
0
. -
Let layout be bindGroup.
[[layout]]
. -
For each
GPUBindGroupEntry
entry in bindGroup.[[entries]]
ordered in increasing values of entry.binding
:-
Let bindingDescriptor be the
GPUBindGroupLayoutEntry
at layout.[[entryMap]]
[entry.binding
]: -
If bindingDescriptor.
buffer
?.hasDynamicOffset
istrue
:-
Let bufferBinding be entry.
resource
. -
Let bufferLayout be bindingDescriptor.
buffer
. -
Call steps with bufferBinding, bufferLayout, and dynamicOffsetIndex.
-
Let dynamicOffsetIndex be dynamicOffsetIndex +
1
-
-
Validate encoder bind groups(encoder, pipeline)
Arguments:
GPUBindingCommandsMixin
encoder-
Encoder whose bind groups are being validated.
GPUPipelineBase
pipeline-
Pipeline to validate the encoder's bind groups are compatible with.
-
If any of the following conditions are unsatisfied, return
false
:-
pipeline must not be
null
. -
All bind groups used by the pipeline must be set and compatible with the pipeline layout:For each pair of (
GPUIndex32
index,GPUBindGroupLayout
bindGroupLayout) in pipeline.[[layout]]
.[[bindGroupLayouts]]
:-
Let bindGroup be encoder.
[[bind_groups]]
[index]. -
bindGroup must not be
null
. -
bindGroup.
[[layout]]
must be group-equivalent with bindGroupLayout.
-
-
For buffer bindings that weren’t prevalidated with
minBindingSize
, the binding ranges must be large enough for the minimum buffer binding size. Formalize this check.
-
Encoder bind groups alias a writable resource(encoder, pipeline) must be
false
.
-
Otherwise return true
.
Encoder bind groups alias a writable resource(encoder, pipeline) if any writable buffer binding range overlaps with any other binding range of the same buffer, or any writable texture binding overlaps in texture subresources with any other texture binding (which may use the same or a different GPUTextureView
object).
Arguments:
GPUBindingCommandsMixin
encoder-
Encoder whose bind groups are being validated.
GPUPipelineBase
pipeline-
Pipeline to validate the encoder's bind groups are compatible with.
-
For each stage in [
VERTEX
,FRAGMENT
,COMPUTE
]:-
Let bufferBindings be a list of (
GPUBufferBinding
,boolean
) pairs,where the latter indicates whether the resource was used as writable. -
Let textureViews be a list of (
GPUTextureView
,boolean
) pairs,where the latter indicates whether the resource was used as writable. -
For each pair of (
GPUIndex32
bindGroupIndex,GPUBindGroupLayout
bindGroupLayout) in pipeline.[[layout]]
.[[bindGroupLayouts]]
:-
Let bindGroup be encoder.
[[bind_groups]]
[bindGroupIndex]. -
Let bindGroupLayoutEntries be bindGroupLayout.
[[descriptor]]
.entries
. -
Let bufferRanges be the bound buffer ranges of bindGroup,given dynamic offsets encoder.
[[dynamic_offsets]]
[bindGroupIndex] -
For each (
GPUBindGroupLayoutEntry
bindGroupLayoutEntry,GPUBufferBinding
resource) in bufferRanges, in which bindGroupLayoutEntry.visibility
contains stage:-
Let resourceWritable be (bindGroupLayoutEntry.
buffer
.type
=="storage"
). -
For each pair (
GPUBufferBinding
pastResource,boolean
pastResourceWritable) in bufferBindings:-
If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are buffer-binding-aliasing, return
true
.
-
-
Append (resource, resourceWritable) to bufferBindings.
-
-
For each
GPUBindGroupLayoutEntry
bindGroupLayoutEntry in bindGroupLayoutEntries, and correspondingGPUTextureView
resource in bindGroup, in which bindGroupLayoutEntry.visibility
contains stage:-
Let resourceWritable be whether bindGroupLayoutEntry.
storageTexture
.access
is a writable access mode. -
If bindGroupLayoutEntry.
storageTexture
is not provided, continue. -
For each pair (
GPUTextureView
pastResource,boolean
pastResourceWritable) in textureViews,-
If (resourceWritable or pastResourceWritable) is true, and pastResource and resource are texture-view-aliasing, return
true
.
-
-
Append (resource, resourceWritable) to textureViews.
-
-
-
-
Return
false
.
Note: Implementations are strongly encouraged to optimize this algorithm.
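The core of the buffer half of this algorithm is a pairwise range-overlap test. A non-normative sketch, using plain objects in place of GPUBufferBinding:

```javascript
// Non-normative sketch of the writable-aliasing test above: two bindings
// of the same buffer alias if their bound ranges overlap and at least
// one of the two is writable.
function bufferRangesAlias(a, b) {
  if (a.buffer !== b.buffer) return false;
  return a.offset < b.offset + b.size && b.offset < a.offset + a.size;
}

function hasWritableAliasing(bindings /* [{buffer, offset, size, writable}] */) {
  for (let i = 0; i < bindings.length; i++) {
    for (let j = i + 1; j < bindings.length; j++) {
      if ((bindings[i].writable || bindings[j].writable) &&
          bufferRangesAlias(bindings[i], bindings[j])) {
        return true;
      }
    }
  }
  return false;
}
```

Note that two read-only bindings may overlap freely; only a writable binding makes overlap an error.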
15. Debug Markers
GPUDebugCommandsMixin
provides methods to apply debug labels to groups of commands or insert a single label into the command sequence.
Debug groups can be nested to create a hierarchy of labeled commands, and must be well-balanced.
Like object labels
, these labels have no required behavior, but may be shown in error messages and browser developer tools, and may be passed to native API backends.
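The well-balanced requirement means every pushDebugGroup() must be matched by a popDebugGroup() before the encoder or pass ends. A non-normative sketch of that balance check, mirroring the [[debug_group_stack]] slot:

```javascript
// Non-normative sketch: checks that a recorded sequence of debug
// commands is well-balanced, mirroring [[debug_group_stack]].
function debugGroupsBalanced(commands /* [{op: "push"|"pop", label?}] */) {
  const stack = [];
  for (const cmd of commands) {
    if (cmd.op === "push") {
      stack.push(cmd.label);
    } else if (cmd.op === "pop") {
      if (stack.length === 0) return false; // pop without matching push
      stack.pop();
    }
  }
  return stack.length === 0; // every group must be closed
}
```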
interface mixin GPUDebugCommandsMixin {
    undefined pushDebugGroup(USVString groupLabel);
    undefined popDebugGroup();
    undefined insertDebugMarker(USVString markerLabel);
};
GPUDebugCommandsMixin
assumes the presence of GPUObjectBase
and GPUCommandsMixin
members on the same object. It must only be included by interfaces which also include those mixins.
GPUDebugCommandsMixin
adds the following internal slots to interfaces which include it:
[[debug_group_stack]], of type stack<USVString>
-
A stack of the labels of the currently active debug groups, initially empty.
GPUDebugCommandsMixin
adds the following methods to interfaces which include it:
pushDebugGroup(groupLabel)
-
Begins a labeled debug group containing subsequent commands.
Called on:
GPUDebugCommandsMixin
this.Arguments:
Arguments for the GPUDebugCommandsMixin.pushDebugGroup(groupLabel) method. Parameter Type Nullable Optional Description groupLabel
USVString
✘ ✘ The label for the command group. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
Push groupLabel onto this.
[[debug_group_stack]]
.
-
popDebugGroup()
-
Ends the labeled debug group most recently started by
pushDebugGroup()
.Called on:
GPUDebugCommandsMixin
this.Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If this.
[[debug_group_stack]]
is empty, make this invalid and stop.
-
Pop an entry off of this.
[[debug_group_stack]]
.
-
insertDebugMarker(markerLabel)
-
Marks a point in a stream of commands with a label.
Called on:
GPUDebugCommandsMixin
this.Arguments:
Arguments for the GPUDebugCommandsMixin.insertDebugMarker(markerLabel) method. Parameter Type Nullable Optional Description markerLabel
USVString
✘ ✘ The label to insert. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
16. Compute Passes
16.1. GPUComputePassEncoder
[Exposed=(Window, Worker), SecureContext]
interface GPUComputePassEncoder {
    undefined setPipeline(GPUComputePipeline pipeline);
    undefined dispatchWorkgroups(GPUSize32 workgroupCountX,
        optional GPUSize32 workgroupCountY = 1,
        optional GPUSize32 workgroupCountZ = 1);
    undefined dispatchWorkgroupsIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    undefined end();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUCommandsMixin;
GPUComputePassEncoder includes GPUDebugCommandsMixin;
GPUComputePassEncoder includes GPUBindingCommandsMixin;
GPUComputePassEncoder
has the following internal slots:
[[command_encoder]]
, of typeGPUCommandEncoder
, readonly-
The
GPUCommandEncoder
that created this compute pass encoder. [[pipeline]]
, of typeGPUComputePipeline
, readonly-
The current
GPUComputePipeline
, initiallynull
. [[endTimestampWrite]]
, of type GPU command?, readonly, defaulting tonull
-
GPU command, if any, writing a timestamp when the pass ends.
16.1.1. Compute Pass Encoder Creation
dictionary GPUComputePassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};
querySet
, of type GPUQuerySet-
The
GPUQuerySet
, of type"timestamp"
, that the query results will bewritten to. beginningOfPassWriteIndex
, of type GPUSize32-
If defined, indicates the query index in
querySet
intowhich the timestamp at the beginning of the compute pass will be written. endOfPassWriteIndex
, of type GPUSize32-
If defined, indicates the query index in
querySet
intowhich the timestamp at the end of the compute pass will be written.
Note: Timestamp query values are written in nanoseconds, but how the value is determined isimplementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.
dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
    GPUComputePassTimestampWrites timestampWrites;
};
timestampWrites
, of type GPUComputePassTimestampWrites-
Defines which timestamp values will be written for this pass, and where to write them to.
16.1.2. Dispatch
setPipeline(pipeline)
-
Sets the current
GPUComputePipeline
.Called on:
GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.setPipeline(pipeline) method. Parameter Type Nullable Optional Description pipeline
GPUComputePipeline
✘ ✘ The compute pipeline to use for subsequent dispatch commands. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
pipeline is valid to use with this.
-
-
Set this.
[[pipeline]]
to be pipeline.
-
dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ)
-
Dispatch work to be performed with the current
GPUComputePipeline
.See § 23.2 Computing for the detailed specification.Called on:
GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.dispatchWorkgroups(workgroupCountX, workgroupCountY, workgroupCountZ) method. Parameter Type Nullable Optional Description workgroupCountX
GPUSize32
✘ ✘ X dimension of the grid of workgroups to dispatch. workgroupCountY
GPUSize32
✘ ✔ Y dimension of the grid of workgroups to dispatch. workgroupCountZ
GPUSize32
✘ ✔ Z dimension of the grid of workgroups to dispatch. NOTE:
The x, y, and z values passed to dispatchWorkgroups() and dispatchWorkgroupsIndirect() are the number of workgroups to dispatch for each dimension, not the number of shader invocations to perform across each dimension. This matches the behavior of modern native GPU APIs, but differs from the behavior of OpenCL.
This means that if a GPUShaderModule defines an entry point with @workgroup_size(4, 4), and work is dispatched to it with the call computePass.dispatchWorkgroups(8, 8); the entry point will be invoked 1024 times total: dispatching a 4×4 workgroup 8 times along both the X and Y axes. (4*4*8*8 = 1024)
Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Validate encoder bind groups(this, this.
[[pipeline]]
)istrue
. -
all of workgroupCountX, workgroupCountY and workgroupCountZ are ≤ this.device.limits.
maxComputeWorkgroupsPerDimension
.
-
-
Let passState be a snapshot of this’s current state.
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline.
Queue timeline steps:
-
Execute a grid of workgroups with dimensions [workgroupCountX, workgroupCountY, workgroupCountZ] with passState.
[[pipeline]]
using passState.[[bind_groups]]
.
-
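The relationship between workgroup counts and shader invocations described in the note above can be expressed directly. A non-normative helper; the 1024 figure below matches the @workgroup_size(4, 4) example:

```javascript
// Total shader invocations = workgroup size × number of workgroups,
// multiplied across all three dimensions (unspecified dimensions are 1).
function totalInvocations(workgroupSize, workgroupCount) {
  const [sx, sy = 1, sz = 1] = workgroupSize;
  const [cx, cy = 1, cz = 1] = workgroupCount;
  return sx * sy * sz * cx * cy * cz;
}
```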
dispatchWorkgroupsIndirect(indirectBuffer, indirectOffset)
-
Dispatch work to be performed with the current
GPUComputePipeline
using parameters read from a GPUBuffer
. See § 23.2 Computing for the detailed specification. The indirect dispatch parameters encoded in the buffer must be a tightly packed block of three 32-bit unsigned integer values (12 bytes total), given in the same order as the arguments for
dispatchWorkgroups()
. For example:
const dispatchIndirectParameters = new Uint32Array(3);
dispatchIndirectParameters[0] = workgroupCountX;
dispatchIndirectParameters[1] = workgroupCountY;
dispatchIndirectParameters[2] = workgroupCountZ;
Called on:
GPUComputePassEncoder
this.Arguments:
Arguments for the GPUComputePassEncoder.dispatchWorkgroupsIndirect(indirectBuffer, indirectOffset) method. Parameter Type Nullable Optional Description indirectBuffer
GPUBuffer
✘ ✘ Buffer containing the indirect dispatch parameters. indirectOffset
GPUSize64
✘ ✘ Offset in bytes into indirectBuffer where the dispatch data begins. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
Validate encoder bind groups(this, this.
[[pipeline]]
)istrue
. -
indirectBuffer is valid to use with this.
-
indirectBuffer.
usage
containsINDIRECT
. -
indirectOffset + sizeof(indirect dispatch parameters) ≤ indirectBuffer.
size
. -
indirectOffset is a multiple of 4.
-
-
Add indirectBuffer to the usage scope as
INDIRECT
. -
Let passState be a snapshot of this’s current state.
-
Enqueue a command on this which issues the subsequent steps on the Queue timeline.
Queue timeline steps:
-
Let workgroupCountX be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.
-
Let workgroupCountY be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 4) bytes.
-
Let workgroupCountZ be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 8) bytes.
-
If workgroupCountX, workgroupCountY, or workgroupCountZ is greater than this.device.limits.
maxComputeWorkgroupsPerDimension
,stop. -
Execute a grid of workgroups with dimensions [workgroupCountX, workgroupCountY, workgroupCountZ] with passState.
[[pipeline]]
using passState.[[bind_groups]]
.
-
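The queue timeline reads above amount to decoding three consecutive u32 values starting at indirectOffset. A non-normative sketch over an ArrayBuffer standing in for the indirect buffer's contents (assuming a little-endian host, as on typical WebGPU platforms):

```javascript
// Non-normative: decode indirect dispatch parameters
// (12 bytes: x, y, z workgroup counts) from a buffer snapshot.
function readDispatchParams(arrayBuffer, indirectOffset) {
  const view = new DataView(arrayBuffer);
  return {
    x: view.getUint32(indirectOffset + 0, /* littleEndian */ true),
    y: view.getUint32(indirectOffset + 4, true),
    z: view.getUint32(indirectOffset + 8, true),
  };
}
```

Note that, per the steps above, out-of-range workgroup counts read from the buffer cause the dispatch to be skipped rather than generating a validation error, since the values are only known at queue execution time.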
16.1.3. Finalization
The compute pass encoder can be ended by calling end()
once the user has finished recording commands for the pass. Once end()
has been called the compute pass encoder can no longer be used.
end()
-
Completes recording of the compute pass commands sequence.
Called on:
GPUComputePassEncoder
this.Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Let parentEncoder be this.
[[command_encoder]]
. -
If any of the following requirements are unmet, generate a validation error and stop.
-
this.
[[state]]
must be "open". -
parentEncoder.
[[state]]
must be "locked".
-
-
Set this.
[[state]]
to "ended". -
Set parentEncoder.
[[state]]
to "open". -
If any of the following requirements are unmet, make parentEncoder invalid and stop.
-
this must be valid.
-
this.
[[debug_group_stack]]
must be empty.
-
-
Extend parentEncoder.
[[commands]]
with this.[[commands]]
. -
If this.
[[endTimestampWrite]]
is notnull
:-
Extend parentEncoder.
[[commands]]
with this.[[endTimestampWrite]]
.
-
-
17. Render Passes
17.1. GPURenderPassEncoder
[Exposed=(Window, Worker), SecureContext]
interface GPURenderPassEncoder {
    undefined setViewport(float x, float y,
        float width, float height,
        float minDepth, float maxDepth);
    undefined setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
        GPUIntegerCoordinate width, GPUIntegerCoordinate height);
    undefined setBlendConstant(GPUColor color);
    undefined setStencilReference(GPUStencilValue reference);
    undefined beginOcclusionQuery(GPUSize32 queryIndex);
    undefined endOcclusionQuery();
    undefined executeBundles(sequence<GPURenderBundle> bundles);
    undefined end();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUCommandsMixin;
GPURenderPassEncoder includes GPUDebugCommandsMixin;
GPURenderPassEncoder includes GPUBindingCommandsMixin;
GPURenderPassEncoder includes GPURenderCommandsMixin;
GPURenderPassEncoder
has the following internal slots used for validation while encoding:
[[command_encoder]]
, of typeGPUCommandEncoder
, readonly-
The
GPUCommandEncoder
that created this render pass encoder. [[attachment_size]]
, readonly-
Set to the following extents:
-
width, height
= the dimensions of the pass’s render attachments
-
[[occlusion_query_set]]
, of typeGPUQuerySet
, readonly-
The
GPUQuerySet
to store occlusion query results for the pass, which is initialized with GPURenderPassDescriptor
.occlusionQuerySet
at pass creation time. [[occlusion_query_active]]
, of typeboolean
-
Whether the pass’s
[[occlusion_query_set]]
is being written. [[endTimestampWrite]]
, of type GPU command?, readonly, defaulting tonull
-
GPU command, if any, writing a timestamp when the pass ends.
[[maxDrawCount]]
of typeGPUSize64
, readonly-
The maximum number of draws allowed in this pass.
When executing encoded render pass commands as part of a GPUCommandBuffer
, an internal RenderState object is used to track the current state required for rendering.
RenderState contains the following internal slots used for execution of rendering commands:
[[occlusionQueryIndex]]
, of typeGPUSize32
-
The index into
[[occlusion_query_set]]
at which to store theocclusion query results. [[viewport]]
-
Current viewport rectangle and depth range. Initially set to the following values:
-
x, y
=0.0, 0.0
-
width, height
= the dimensions of the pass’s render targets -
minDepth, maxDepth
=0.0, 1.0
-
[[scissorRect]]
-
Current scissor rectangle. Initially set to the following values:
-
x, y
=0, 0
-
width, height
= the dimensions of the pass’s render targets
-
[[blendConstant]]
, of typeGPUColor
-
Current blend constant value, initially
[0, 0, 0, 0]
. [[stencilReference]]
, of typeGPUStencilValue
-
Current stencil reference value, initially
0
.
17.1.1. Render Pass Encoder Creation
dictionary GPURenderPassTimestampWrites {
    required GPUQuerySet querySet;
    GPUSize32 beginningOfPassWriteIndex;
    GPUSize32 endOfPassWriteIndex;
};
querySet
, of type GPUQuerySet-
The
GPUQuerySet
, of type"timestamp"
, that the query results will bewritten to. beginningOfPassWriteIndex
, of type GPUSize32-
If defined, indicates the query index in
querySet
intowhich the timestamp at the beginning of the render pass will be written. endOfPassWriteIndex
, of type GPUSize32-
If defined, indicates the query index in
querySet
intowhich the timestamp at the end of the render pass will be written.
Note: Timestamp query values are written in nanoseconds, but how the value is determined isimplementation-defined and may not increase monotonically. See § 20.4 Timestamp Query for details.
dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachment?> colorAttachments;
    GPURenderPassDepthStencilAttachment depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
    GPURenderPassTimestampWrites timestampWrites;
    GPUSize64 maxDrawCount = 50000000;
};
colorAttachments
, of typesequence<GPURenderPassColorAttachment?>
-
The set of
GPURenderPassColorAttachment
values in this sequence defines which color attachments will be output to when executing this render pass. Due to usage compatibility, no color attachment may alias another attachment or any resource used inside the render pass.
depthStencilAttachment
, of type GPURenderPassDepthStencilAttachment-
The
GPURenderPassDepthStencilAttachment
value that defines the depth/stencil attachment that will be output to and tested against when executing this render pass. Due to usage compatibility, no writable depth/stencil attachment may alias another attachment or any resource used inside the render pass.
occlusionQuerySet
, of type GPUQuerySet-
The
GPUQuerySet
value defines where the occlusion query results will be stored for this pass. timestampWrites
, of type GPURenderPassTimestampWrites-
Defines which timestamp values will be written for this pass, and where to write them to.
maxDrawCount
, of type GPUSize64, defaulting to50000000
-
The maximum number of draw calls that will be done in the render pass. Used by some implementations to size work injected before the render pass. Keeping the default value is recommended unless it is known that more draw calls will be done.
Valid Usage
Given a GPUDevice
device and GPURenderPassDescriptor
this, the following validation rules apply:
-
this.
colorAttachments
.length must be ≤ device.[[limits]]
.maxColorAttachments
. -
For each non-
null
colorAttachment in this.colorAttachments
:-
colorAttachment must meet the GPURenderPassColorAttachment Valid Usage rules.
-
-
If this.
depthStencilAttachment
is provided:-
this.
depthStencilAttachment
must meet the GPURenderPassDepthStencilAttachment Valid Usage rules.
-
-
There must exist at least one attachment, either:
-
A non-
null
value in this.colorAttachments
, or -
A this.
depthStencilAttachment
.
-
-
Validating GPURenderPassDescriptor’s color attachment bytes per sample(device, this.
colorAttachments
) succeeds. -
All
view
s in non-null
members of this.colorAttachments
,and this.depthStencilAttachment
.view
if present, must have equalsampleCount
s. -
For each
view
in non-null
members of this.colorAttachments
and this.depthStencilAttachment
.view
,if present, the[[renderExtent]]
must match. -
If this.
occlusionQuerySet
is notnull
:-
this.
occlusionQuerySet
.type
must beocclusion
.
-
-
If this.
timestampWrites
is provided:-
Validate timestampWrites(device, this.
timestampWrites
)must return true.
-
Validating GPURenderPassDescriptor’s color attachment bytes per sample(GPUDevice
device, sequence<GPURenderPassColorAttachment
?> colorAttachments)
-
Let formats be an empty list<
GPUTextureFormat
?> -
For each colorAttachment in colorAttachments:
-
If colorAttachment is
undefined
, continue. -
Append colorAttachment.
view
.[[descriptor]]
.format
to formats.
-
-
Calculating color attachment bytes per sample(formats) must be ≤ device.
[[limits]]
.maxColorAttachmentBytesPerSample
.
17.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachment {
    required GPUTextureView view;
    GPUIntegerCoordinate depthSlice;
    GPUTextureView resolveTarget;
    GPUColor clearValue;
    required GPULoadOp loadOp;
    required GPUStoreOp storeOp;
};
view
, of type GPUTextureView-
A
GPUTextureView
describing the texture subresource that will be output to for thiscolor attachment. depthSlice
, of type GPUIntegerCoordinate-
Indicates the depth slice index of
"3d"
view
that will be output to for this color attachment. resolveTarget
, of type GPUTextureView-
A
GPUTextureView
describing the texture subresource that will receive the resolvedoutput for this color attachment ifview
ismultisampled. clearValue
, of type GPUColor-
Indicates the value to clear
view
to prior to executing the render pass. If not provided, defaults to {r: 0, g: 0, b: 0, a: 0}
. IgnoredifloadOp
is not"clear"
.The components of
clearValue
are all double values. They are converted to a texel value of the texture format matching the render attachment. If conversion fails, a validation error is generated. loadOp
, of type GPULoadOp-
Indicates the load operation to perform on
view
prior toexecuting the render pass.Note: It is recommended to prefer clearing; see
"clear"
for details. storeOp
, of type GPUStoreOp-
The store operation to perform on
view
after executing the render pass.
GPURenderPassColorAttachment Valid Usage
Given a GPURenderPassColorAttachment
this:
-
Let renderViewDescriptor be this.
view
.[[descriptor]]
. -
Let resolveViewDescriptor be this.
resolveTarget
.[[descriptor]]
. -
Let renderTexture be this.
view
.[[texture]]
. -
Let resolveTexture be this.
resolveTarget
.[[texture]]
.
The following validation rules apply:
-
renderViewDescriptor.
format
must be a color renderable format. -
this.
view
must be a renderable texture view. -
If renderViewDescriptor.
dimension
is"3d"
:-
this.
depthSlice
must be provided and mustbe < the depthOrArrayLayers of the logical miplevel-specific texture extent of the renderTexture subresource at mipmap level renderViewDescriptor.baseMipLevel
.
Otherwise:
-
this.
depthSlice
must not be provided.
-
-
If this.
loadOp
is"clear"
:-
Converting the IDL value this.
clearValue
to a texel value of texture format renderViewDescriptor.format
must not throw aTypeError
.Note: An error is not thrown if the value is out-of-range for the format but in-range forthe corresponding WGSL primitive type (
f32
,i32
, oru32
).
-
-
If this.
resolveTarget
is provided:-
renderTexture.
sampleCount
must be > 1. -
resolveTexture.
sampleCount
must be 1. -
this.
resolveTarget
must be a renderable texture view. -
The sizes of the subresources seen by this.
resolveTarget
and this.view
must match. -
resolveViewDescriptor.
format
must equal renderViewDescriptor.format
. -
resolveTexture.
format
must equal renderTexture.format
. -
resolveViewDescriptor.
format
must support resolve according to § 26.1.1 Plain color formats.
-
A GPUTextureView
view is a renderable texture view if the following requirements are met:
-
view.
[[texture]]
.usage
must containRENDER_ATTACHMENT
. -
descriptor.
dimension
must be either"2d"
or"3d"
. -
descriptor.
mipLevelCount
must be 1. -
descriptor.
arrayLayerCount
must be 1. -
descriptor.
aspect
must refer to all aspects of view.[[texture]]
.
where descriptor is view.[[descriptor]]
.
Calculating color attachment bytes per sample(formats)
Arguments:
-
sequence<
GPUTextureFormat
?> formats
Returns: GPUSize32
-
Let total be 0.
-
For each non-null format in formats
-
Assert: format is a color renderable format.
-
Let renderTargetPixelByteCost be the render target pixel byte cost of format.
-
Let renderTargetComponentAlignment be the render target component alignment of format.
-
Round total up to the smallest multiple of renderTargetComponentAlignment greater than or equal to total.
-
Add renderTargetPixelByteCost to total.
-
-
Return total.
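The align-then-add loop above can be sketched as follows. The per-format byte costs and alignments passed in here are illustrative stand-ins, not normative values; see § 26.1.1 Plain color formats for the real tables.

```javascript
// Non-normative sketch of "Calculating color attachment bytes per sample".
// `costs` maps a format name to hypothetical { byteCost, alignment } values.
function colorAttachmentBytesPerSample(formats, costs) {
  let total = 0;
  for (const format of formats) {
    if (format === null) continue;
    const { byteCost, alignment } = costs[format];
    // Round total up to the render target component alignment...
    total = Math.ceil(total / alignment) * alignment;
    // ...then add the render target pixel byte cost.
    total += byteCost;
  }
  return total;
}
```

The resulting total is compared against the device's maxColorAttachmentBytesPerSample limit when validating a render pass descriptor.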
17.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachment {
    required GPUTextureView view;
    float depthClearValue;
    GPULoadOp depthLoadOp;
    GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;
    GPUStencilValue stencilClearValue = 0;
    GPULoadOp stencilLoadOp;
    GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};
view
, of type GPUTextureView-
A
GPUTextureView
describing the texture subresource that will be output toand read from for this depth/stencil attachment. depthClearValue
, of type float-
Indicates the value to clear
view
's depth componentto prior to executing the render pass. Ignored ifdepthLoadOp
is not"clear"
. Must be between 0.0 and 1.0, inclusive. depthLoadOp
, of type GPULoadOp-
Indicates the load operation to perform on
view
'sdepth component prior to executing the render pass.Note: It is recommended to prefer clearing; see
"clear"
for details. depthStoreOp
, of type GPUStoreOp-
The store operation to perform on
view
'sdepth component after executing the render pass. depthReadOnly
, of type boolean, defaulting tofalse
-
Indicates that the depth component of
view
is read only. stencilClearValue
, of type GPUStencilValue, defaulting to0
-
Indicates the value to clear
view
's stencil componentto prior to executing the render pass. Ignored ifstencilLoadOp
is not"clear"
.The value will be converted to the type of the stencil aspect of view by taking the samenumber of LSBs as the number of bits in the stencil aspect of one texel block of view.
stencilLoadOp
, of type GPULoadOp-
Indicates the load operation to perform on
view
'sstencil component prior to executing the render pass.Note: It is recommended to prefer clearing; see
"clear"
for details. stencilStoreOp
, of type GPUStoreOp-
The store operation to perform on
view
'sstencil component after executing the render pass. stencilReadOnly
, of type boolean, defaulting tofalse
-
Indicates that the stencil component of
view
is read only.
GPURenderPassDepthStencilAttachment Valid Usage
Given a GPURenderPassDepthStencilAttachment
this, the following validation rules apply:
-
this.
view
must have a depth-or-stencil format. -
this.
view
must be a renderable texture view. -
Let format be this.
view
.[[descriptor]]
.format
. -
If this.
depthLoadOp
is"clear"
, this.depthClearValue
must be provided and must be between 0.0 and 1.0,inclusive. -
If format has a depth aspect and this.
depthReadOnly
isfalse
:-
this.
depthLoadOp
must be provided. -
this.
depthStoreOp
must be provided.
Otherwise:
-
this.
depthLoadOp
must not be provided. -
this.
depthStoreOp
must not be provided.
-
-
If format has a stencil aspect and this.
stencilReadOnly
isfalse
:-
this.
stencilLoadOp
must be provided. -
this.
stencilStoreOp
must be provided.
Otherwise:
-
this.
stencilLoadOp
must not be provided. -
this.
stencilStoreOp
must not be provided.
-
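The rules above are symmetric for the two aspects: load/store ops are required when the aspect is present and writable, and forbidden otherwise. A non-normative checker; hasDepth and hasStencil stand in for inspecting the view's format:

```javascript
// Non-normative sketch of the depth/stencil load/store-op presence rules.
function checkAspectOps(aspectPresent, readOnly, loadOp, storeOp) {
  const provided = (v) => v !== undefined;
  if (aspectPresent && !readOnly) {
    return provided(loadOp) && provided(storeOp);  // both required
  }
  return !provided(loadOp) && !provided(storeOp);  // both forbidden
}

function validateDepthStencilOps(a, hasDepth, hasStencil) {
  return checkAspectOps(hasDepth, a.depthReadOnly ?? false,
                        a.depthLoadOp, a.depthStoreOp) &&
         checkAspectOps(hasStencil, a.stencilReadOnly ?? false,
                        a.stencilLoadOp, a.stencilStoreOp);
}
```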
17.1.1.3. Load & Store Operations
enum GPULoadOp {
    "load",
    "clear",
};
"load"
-
Loads the existing value for this attachment into the render pass.
"clear"
-
Loads a clear value for this attachment into the render pass.
Note: On some GPU hardware (primarily mobile),
"clear"
is significantly cheaper because it avoids loading data from main memory into tile-local memory. On other GPU hardware, there isn’t a significant difference. As a result, it is recommended to use
rather than"load"
in cases where theinitial value doesn’t matter (e.g. the render target will be cleared using a skybox).
enum GPUStoreOp {
    "store",
    "discard",
};
"store"
-
Stores the resulting value of the render pass for this attachment.
"discard"
-
Discards the resulting value of the render pass for this attachment.
17.1.1.4. Render Pass Layout
GPURenderPassLayout
declares the layout of the render targets of a GPURenderBundle
.It is also used internally to describe GPURenderPassEncoder
layouts and GPURenderPipeline
layouts.It determines compatibility between render passes, render bundles, and render pipelines.
dictionary GPURenderPassLayout : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat?> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};
colorFormats
, of typesequence<GPUTextureFormat?>
-
A list of the
GPUTextureFormat
s of the color attachments for this pass or bundle. depthStencilFormat
, of type GPUTextureFormat-
The
GPUTextureFormat
of the depth/stencil attachment for this pass or bundle. sampleCount
, of type GPUSize32, defaulting to1
-
Number of samples per pixel in the attachments for this pass or bundle.
Two GPURenderPassLayout
values are equal if:
-
Their
depthStencilFormat
andsampleCount
are equal, and -
Their
colorFormats
are equal ignoring any trailingnull
s.
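A non-normative sketch of this equality test, stripping trailing nulls before comparing the color formats:

```javascript
// Non-normative sketch of GPURenderPassLayout equality.
function layoutsEqual(a, b) {
  const strip = (formats) => {
    const out = formats.slice();
    while (out.length && out[out.length - 1] === null) out.pop();
    return out;
  };
  const fa = strip(a.colorFormats);
  const fb = strip(b.colorFormats);
  return a.depthStencilFormat === b.depthStencilFormat &&
         a.sampleCount === b.sampleCount &&
         fa.length === fb.length &&
         fa.every((format, i) => format === fb[i]);
}
```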
derive render targets layout from pass
Arguments:
-
GPURenderPassDescriptor
descriptor
Returns: GPURenderPassLayout
-
Let layout be a new
GPURenderPassLayout
object. -
For each colorAttachment in descriptor.
colorAttachments
:-
If colorAttachment is not
null
:-
Set layout.
sampleCount
to colorAttachment.view
.[[texture]]
.sampleCount
. -
Append colorAttachment.
view
.[[descriptor]]
.format
to layout.colorFormats
.
-
-
Otherwise:
-
Append
null
to layout.colorFormats
.
-
-
-
Let depthStencilAttachment be descriptor.
depthStencilAttachment
,ornull
if not provided. -
If depthStencilAttachment is not
null
:-
Let view be depthStencilAttachment.
view
. -
Set layout.
sampleCount
to view.[[texture]]
.sampleCount
. -
Set layout.
depthStencilFormat
to view.[[descriptor]]
.format
.
-
-
Return layout.
derive render targets layout from pipeline
Arguments:
-
GPURenderPipelineDescriptor
descriptor
Returns: GPURenderPassLayout
-
Let layout be a new
GPURenderPassLayout
object. -
Set layout.
sampleCount
to descriptor.multisample
.count
. -
If descriptor.
depthStencil
is provided:-
Set layout.
depthStencilFormat
to descriptor.depthStencil
.format
.
-
-
If descriptor.
fragment
is provided:-
For each colorTarget in descriptor.
fragment
.targets
:-
Append colorTarget.
format
to layout.colorFormats
if colorTarget is notnull
, or appendnull
otherwise.
-
-
-
Return layout.
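The pipeline-side derivation above can be sketched as a plain function over a descriptor-shaped object. Non-normative; it mirrors the listed steps rather than specifying them:

```javascript
// Non-normative sketch of "derive render targets layout from pipeline".
function deriveLayoutFromPipeline(descriptor) {
  const layout = { colorFormats: [], depthStencilFormat: undefined, sampleCount: 1 };
  layout.sampleCount = descriptor.multisample?.count ?? 1;
  if (descriptor.depthStencil) {
    layout.depthStencilFormat = descriptor.depthStencil.format;
  }
  if (descriptor.fragment) {
    for (const target of descriptor.fragment.targets) {
      // A null target occupies a slot but contributes no format.
      layout.colorFormats.push(target ? target.format : null);
    }
  }
  return layout;
}
```

Comparing this derived layout against the one derived from a pass (or bundle) is what determines whether a pipeline may be used in that pass.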
17.1.2. Finalization
The render pass encoder can be ended by calling end()
once the user has finished recording commands for the pass. Once end()
has been called the render pass encoder can no longer be used.
end()
-
Completes recording of the render pass commands sequence.
Called on:
GPURenderPassEncoder
this.Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Let parentEncoder be this.
[[command_encoder]]
. -
If any of the following requirements are unmet, generate a validation error and stop.
-
this.
[[state]]
must be "open". -
parentEncoder.
[[state]]
must be "locked".
-
-
Set this.
[[state]]
to "ended". -
Set parentEncoder.
[[state]]
to "open". -
If any of the following requirements are unmet, make parentEncoder invalid and stop.
-
this must be valid.
-
this.
[[debug_group_stack]]
must be empty. -
this.
[[occlusion_query_active]]
must be false
. -
this.
[[drawCount]]
must be ≤ this.[[maxDrawCount]]
.
-
-
Extend parentEncoder.
[[commands]]
with this.[[commands]]
. -
If this.
[[endTimestampWrite]]
is not null
:-
Extend parentEncoder.
[[commands]]
with this.[[endTimestampWrite]]
.
-
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Perform the attachment stores/discards.
-
Set renderState to
null
.
-
17.2. GPURenderCommandsMixin
GPURenderCommandsMixin
defines rendering commands common to GPURenderPassEncoder
and GPURenderBundleEncoder
.
interface mixin GPURenderCommandsMixin {undefined setPipeline(GPURenderPipeline pipeline);undefined setIndexBuffer(GPUBuffer buffer, GPUIndexFormat indexFormat,optional GPUSize64 offset = 0,optional GPUSize64 size);undefined setVertexBuffer(GPUIndex32 slot, GPUBuffer? buffer,optional GPUSize64 offset = 0,optional GPUSize64 size);undefined draw(GPUSize32 vertexCount,optional GPUSize32 instanceCount = 1,optional GPUSize32 firstVertex = 0,optional GPUSize32 firstInstance = 0);undefined drawIndexed(GPUSize32 indexCount,optional GPUSize32 instanceCount = 1,optional GPUSize32 firstIndex = 0,optional GPUSignedOffset32 baseVertex = 0,optional GPUSize32 firstInstance = 0);undefined drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);undefined drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);};
GPURenderCommandsMixin
assumes the presence of GPUObjectBase
, GPUCommandsMixin
, and GPUBindingCommandsMixin
members on the same object. It must only be included by interfaces which also include those mixins.
GPURenderCommandsMixin
has the following internal slots:
[[layout]]
, of typeGPURenderPassLayout
, readonly-
The layout of the render pass.
[[depthReadOnly]]
, of type boolean, readonly-
If
true
, indicates that the depth component is not modified. [[stencilReadOnly]]
, of type boolean, readonly-
If
true
, indicates that the stencil component is not modified. [[pipeline]]
, of typeGPURenderPipeline
-
The current
GPURenderPipeline
, initially null
. [[index_buffer]]
, of typeGPUBuffer
-
The current buffer to read index data from, initially
null
. [[index_format]]
, of typeGPUIndexFormat
-
The format of the index data in
[[index_buffer]]
. [[index_buffer_offset]]
, of typeGPUSize64
-
The offset in bytes of the section of
[[index_buffer]]
currently set. [[index_buffer_size]]
, of typeGPUSize64
-
The size in bytes of the section of
[[index_buffer]]
currently set, initially 0
. [[vertex_buffers]]
, of type ordered map<slot,GPUBuffer
>-
The current
GPUBuffer
s to read vertex data from for each slot, initially empty. [[vertex_buffer_sizes]]
, of type ordered map<slot,GPUSize64
>-
The size in bytes of the section of
GPUBuffer
currently set for each slot, initially empty. [[drawCount]]
, of typeGPUSize64
-
The number of draw commands recorded in this encoder.
To Enqueue a render command on GPURenderCommandsMixin
encoder which issues the steps of a GPU Command command with RenderState renderState:
-
Append command to encoder.
[[commands]]
. -
When command is executed as part of a
GPUCommandBuffer
commandBuffer:-
Issue the steps of command with commandBuffer.
[[renderState]]
as renderState.
-
17.2.1. Drawing
setPipeline(pipeline)
-
Sets the current
GPURenderPipeline
.Called on:
GPURenderCommandsMixin
this.Arguments:
Arguments for the GPURenderCommandsMixin.setPipeline(pipeline) method. Parameter Type Nullable Optional Description pipeline
GPURenderPipeline
✘ ✘ The render pipeline to use for subsequent drawing commands. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
Let pipelineTargetsLayout be derive render targets layout from pipeline(pipeline.
[[descriptor]]
). -
If any of the following conditions are unsatisfied, make this invalid and stop.
-
pipeline is valid to use with this.
-
this.
[[layout]]
equals pipelineTargetsLayout. -
If pipeline.
[[writesDepth]]
: this.[[depthReadOnly]]
must befalse
. -
If pipeline.
[[writesStencil]]
: this.[[stencilReadOnly]]
must befalse
.
-
-
Set this.
[[pipeline]]
to be pipeline.
-
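The setPipeline() validation above can be sketched as a plain function (hypothetical object shapes; the layout comparison is simplified to a structural check, and "pipeline is valid to use with this" is omitted):

```javascript
// Sketch of setPipeline() validation.
function validateSetPipeline(encoderState, pipeline) {
  // The pipeline's render targets layout must equal the pass layout
  // (simplified here to a structural comparison).
  if (JSON.stringify(encoderState.layout) !== JSON.stringify(pipeline.targetsLayout)) return false;
  // A pipeline that writes depth or stencil cannot be used when the
  // corresponding aspect of the pass is read-only.
  if (pipeline.writesDepth && encoderState.depthReadOnly) return false;
  if (pipeline.writesStencil && encoderState.stencilReadOnly) return false;
  return true;
}
```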
setIndexBuffer(buffer, indexFormat, offset, size)
-
Sets the current index buffer.
Called on:
GPURenderCommandsMixin
this.Arguments:
Arguments for the GPURenderCommandsMixin.setIndexBuffer(buffer, indexFormat, offset, size) method. Parameter Type Nullable Optional Description buffer
GPUBuffer
✘ ✘ Buffer containing index data to use for subsequent drawing commands. indexFormat
GPUIndexFormat
✘ ✘ Format of the index data contained in buffer. offset
GPUSize64
✘ ✔ Offset in bytes into buffer where the index data begins. Defaults to 0
.size
GPUSize64
✘ ✔ Size in bytes of the index data in buffer. Defaults to the size of the buffer minus the offset. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
Validate the encoder state of this. If it returns false, stop.
-
If size is missing, set size to max(0, buffer.
size
- offset). -
If any of the following conditions are unsatisfied, make this invalid and stop.
-
buffer is valid to use with this.
-
buffer.
usage
containsINDEX
. -
offset is a multiple of indexFormat’s byte size.
-
offset + size ≤ buffer.
size
.
-
-
Add buffer to the usage scope as input.
-
Set this.
[[index_buffer]]
to be buffer. -
Set this.
[[index_format]]
to be indexFormat. -
Set this.
[[index_buffer_offset]]
to be offset. -
Set this.
[[index_buffer_size]]
to be size.
-
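The setIndexBuffer() size-defaulting and validation steps above can be sketched as a plain function (hypothetical shapes; INDEX_USAGE mirrors the GPUBufferUsage.INDEX flag value):

```javascript
// Sketch of setIndexBuffer() validation.
const INDEX_USAGE = 0x0010; // mirrors GPUBufferUsage.INDEX
const INDEX_FORMAT_BYTE_SIZE = { uint16: 2, uint32: 4 };

function validateSetIndexBuffer(buffer, indexFormat, offset = 0, size) {
  // Default size: the rest of the buffer past offset.
  if (size === undefined) size = Math.max(0, buffer.size - offset);
  const byteSize = INDEX_FORMAT_BYTE_SIZE[indexFormat];
  if ((buffer.usage & INDEX_USAGE) === 0) return null; // usage must contain INDEX
  if (offset % byteSize !== 0) return null;            // offset aligned to index element size
  if (offset + size > buffer.size) return null;        // bound region must fit in the buffer
  return { offset, size };
}
```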
setVertexBuffer(slot, buffer, offset, size)
-
Sets the current vertex buffer for the given slot.
Called on:
GPURenderCommandsMixin
this.Arguments:
Arguments for the GPURenderCommandsMixin.setVertexBuffer(slot, buffer, offset, size) method. Parameter Type Nullable Optional Description slot
GPUIndex32
✘ ✘ The vertex buffer slot to set the vertex buffer for. buffer
GPUBuffer?
✔ ✘ Buffer containing vertex data to use for subsequent drawing commands. offset
GPUSize64
✘ ✔ Offset in bytes into buffer where the vertex data begins. Defaults to 0
.size
GPUSize64
✘ ✔ Size in bytes of the vertex data in buffer. Defaults to the size of the buffer minus the offset. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
Validate the encoder state of this. If it returns false, stop.
-
Let bufferSize be 0 if buffer is
null
, or buffer.size
if not. -
If size is missing, set size to max(0, bufferSize - offset).
-
If any of the following requirements are unmet, make this invalid and stop.
-
slot must be < this.
[[device]]
.[[limits]]
.maxVertexBuffers
. -
offset must be a multiple of 4.
-
offset + size must be ≤ bufferSize.
-
-
If buffer is
null
:
-
Remove this.
[[vertex_buffers]]
[slot]. -
Remove this.
[[vertex_buffer_sizes]]
[slot].
Otherwise:
-
If any of the following requirements are unmet, make this invalid and stop.
-
buffer must be valid to use with this.
-
buffer.
usage
must containVERTEX
.
-
-
Add buffer to the usage scope as input.
-
Set this.
[[vertex_buffers]]
[slot] to be buffer. -
Set this.
[[vertex_buffer_sizes]]
[slot] to be size.
-
-
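The setVertexBuffer() steps above can be sketched as a plain function (hypothetical shapes; the null-buffer branch is assumed to unbind the slot, and maxVertexBuffers stands in for the device limit):

```javascript
// Sketch of setVertexBuffer(): size defaulting, validation, and slot update.
function setVertexBufferSketch(state, slot, buffer, offset = 0, size, maxVertexBuffers = 8) {
  const bufferSize = buffer === null ? 0 : buffer.size;
  if (size === undefined) size = Math.max(0, bufferSize - offset); // default size
  if (slot >= maxVertexBuffers) return false;  // slot must be within the limit
  if (offset % 4 !== 0) return false;          // offset must be a multiple of 4
  if (offset + size > bufferSize) return false;
  if (buffer === null) {
    // Assumed behavior: a null buffer unbinds the slot.
    state.vertexBuffers.delete(slot);
    state.vertexBufferSizes.delete(slot);
  } else {
    state.vertexBuffers.set(slot, buffer);
    state.vertexBufferSizes.set(slot, size);
  }
  return true;
}
```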
draw(vertexCount, instanceCount, firstVertex, firstInstance)
-
Draws primitives. See § 23.3 Rendering for the detailed specification.
Called on:
GPURenderCommandsMixin
this.Arguments:
Arguments for the GPURenderCommandsMixin.draw(vertexCount, instanceCount, firstVertex, firstInstance) method. Parameter Type Nullable Optional Description vertexCount
GPUSize32
✘ ✘ The number of vertices to draw. instanceCount
GPUSize32
✘ ✔ The number of instances to draw. firstVertex
GPUSize32
✘ ✔ Offset into the vertex buffers, in vertices, to begin drawing from. firstInstance
GPUSize32
✘ ✔ First instance to draw. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
It is valid to draw with this.
-
Let buffers be this.
[[pipeline]]
.[[descriptor]]
.vertex
.buffers
. -
For each
GPUIndex32
slot from 0
to buffers.length (non-inclusive):-
If buffers[slot] is
null
, continue. -
Let bufferSize be this.
[[vertex_buffer_sizes]]
[slot]. -
Let stride be buffers[slot].
arrayStride
. -
Let lastStride be max(attribute.
offset
+ sizeof(attribute.format
))for each attribute in buffers[slot].attributes
. -
Let strideCount be computed based on buffers[slot].
stepMode
:"vertex"
-
firstVertex + vertexCount
"instance"
-
firstInstance + instanceCount
-
If strideCount ≠
0
-
Ensure (strideCount −
1
) × stride + lastStride ≤ bufferSize.
-
-
-
-
Increment this.
[[drawCount]]
by 1. -
Let passState be a snapshot of this’s current state.
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from passState and renderState.
-
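The per-slot vertex buffer bounds check in the draw() steps above can be sketched as a plain function (hypothetical shapes for the pipeline's vertex buffer layouts; only a subset of vertex format sizes is included):

```javascript
// Sketch of the draw() vertex-range check:
// (strideCount - 1) * stride + lastStride must fit in the bound buffer region.
const FORMAT_SIZE = { float32: 4, float32x2: 8, float32x3: 12, float32x4: 16 }; // subset

function vertexRangeFits(bufferLayout, boundSize, firstVertex, vertexCount,
                         firstInstance, instanceCount) {
  const stride = bufferLayout.arrayStride;
  // lastStride: the furthest byte any attribute reaches within one stride.
  const lastStride = Math.max(
    ...bufferLayout.attributes.map(a => a.offset + FORMAT_SIZE[a.format]));
  const strideCount = bufferLayout.stepMode === 'instance'
    ? firstInstance + instanceCount
    : firstVertex + vertexCount;
  if (strideCount === 0) return true;
  return (strideCount - 1) * stride + lastStride <= boundSize;
}
```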
drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance)
-
Draws indexed primitives. See § 23.3 Rendering for the detailed specification.
Called on:
GPURenderCommandsMixin
this.Arguments:
Arguments for the GPURenderCommandsMixin.drawIndexed(indexCount, instanceCount, firstIndex, baseVertex, firstInstance) method. Parameter Type Nullable Optional Description indexCount
GPUSize32
✘ ✘ The number of indices to draw. instanceCount
GPUSize32
✘ ✔ The number of instances to draw. firstIndex
GPUSize32
✘ ✔ Offset into the index buffer, in indices, to begin drawing from. baseVertex
GPUSignedOffset32
✘ ✔ Added to each index value before indexing into the vertex buffers. firstInstance
GPUSize32
✘ ✔ First instance to draw. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
It is valid to draw indexed with this.
-
firstIndex + indexCount ≤ this.
[[index_buffer_size]]
÷ this.[[index_format]]
's byte size. -
Let buffers be this.
[[pipeline]]
.[[descriptor]]
.vertex
.buffers
. -
For each
GPUIndex32
slot from 0
to buffers.length (non-inclusive):-
If buffers[slot] is
null
, continue. -
Let bufferSize be this.
[[vertex_buffer_sizes]]
[slot]. -
Let stride be buffers[slot].
arrayStride
. -
Let lastStride be max(attribute.
offset
+ sizeof(attribute.format
))for each attribute in buffers[slot].attributes
. -
Let strideCount be firstInstance + instanceCount.
-
If buffers[slot].
stepMode
is"instance"
and strideCount ≠0
:-
Ensure (strideCount −
1
) × stride + lastStride ≤ bufferSize.
-
-
-
-
Increment this.
[[drawCount]]
by 1. -
Let passState be a snapshot of this’s current state.
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from passState and renderState.
Note: a valid program should also never use vertex indices with
GPUVertexStepMode."vertex"
that are out of bounds. WebGPU implementations have different ways of handling this, and therefore a range of behaviors is allowed. Either the whole draw call is discarded, or the access to those attributes out of bounds is described by WGSL’s invalid memory reference. -
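The index range validation in the drawIndexed() steps above can be sketched as a plain function (hypothetical helper name):

```javascript
// Sketch of the drawIndexed() index-range check:
// firstIndex + indexCount must fit in the bound index buffer region.
function indexRangeFits(firstIndex, indexCount, indexBufferSize, indexFormat) {
  const byteSize = indexFormat === 'uint16' ? 2 : 4;
  return firstIndex + indexCount <= Math.floor(indexBufferSize / byteSize);
}
```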
drawIndirect(indirectBuffer, indirectOffset)
-
Draws primitives using parameters read from a
GPUBuffer
. See § 23.3 Rendering for the detailed specification. The indirect draw parameters encoded in the buffer must be a tightly packed block of four 32-bit unsigned integer values (16 bytes total), given in the same order as the arguments for
draw()
. For example:
let drawIndirectParameters = new Uint32Array(4);
drawIndirectParameters[0] = vertexCount;
drawIndirectParameters[1] = instanceCount;
drawIndirectParameters[2] = firstVertex;
drawIndirectParameters[3] = firstInstance;
The value corresponding to
firstInstance
must be 0, unless the"indirect-first-instance"
feature is enabled. If the"indirect-first-instance"
feature is not enabled and firstInstance
is not zero, the drawIndirect()
call will be treated as a no-op.Called on:
GPURenderCommandsMixin
this.Arguments:
Arguments for the GPURenderCommandsMixin.drawIndirect(indirectBuffer, indirectOffset) method. Parameter Type Nullable Optional Description indirectBuffer
GPUBuffer
✘ ✘ Buffer containing the indirect draw parameters. indirectOffset
GPUSize64
✘ ✘ Offset in bytes into indirectBuffer where the drawing data begins. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
It is valid to draw with this.
-
indirectBuffer is valid to use with this.
-
indirectBuffer.
usage
containsINDIRECT
. -
indirectOffset + sizeof(indirect draw parameters) ≤ indirectBuffer.
size
. -
indirectOffset is a multiple of 4.
-
-
Add indirectBuffer to the usage scope as input.
-
Increment this.
[[drawCount]]
by 1. -
Let passState be a snapshot of this’s current state.
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Let vertexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.
-
Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 4) bytes.
-
Let firstVertex be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 8) bytes.
-
Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 12) bytes.
-
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of vertexCount vertices, starting with vertex firstVertex, with the states from passState and renderState.
-
drawIndexedIndirect(indirectBuffer, indirectOffset)
-
Draws indexed primitives using parameters read from a
GPUBuffer
. See § 23.3 Rendering for the detailed specification. The indirect drawIndexed parameters encoded in the buffer must be a tightly packed block of five 32-bit unsigned integer values (20 bytes total), given in the same order as the arguments for
drawIndexed()
. For example:
let drawIndexedIndirectParameters = new Uint32Array(5);
drawIndexedIndirectParameters[0] = indexCount;
drawIndexedIndirectParameters[1] = instanceCount;
drawIndexedIndirectParameters[2] = firstIndex;
drawIndexedIndirectParameters[3] = baseVertex;
drawIndexedIndirectParameters[4] = firstInstance;
The value corresponding to
firstInstance
must be 0, unless the"indirect-first-instance"
feature is enabled. If the"indirect-first-instance"
feature is not enabled and firstInstance
is not zero, the drawIndexedIndirect()
call will be treated as a no-op.Called on:
GPURenderCommandsMixin
this.Arguments:
Arguments for the GPURenderCommandsMixin.drawIndexedIndirect(indirectBuffer, indirectOffset) method. Parameter Type Nullable Optional Description indirectBuffer
GPUBuffer
✘ ✘ Buffer containing the indirect drawIndexed parameters. indirectOffset
GPUSize64
✘ ✘ Offset in bytes into indirectBuffer where the drawing data begins. Returns:
undefined
Issue the following steps on the Device timeline of this.
[[device]]
:-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
It is valid to draw indexed with this.
-
indirectBuffer is valid to use with this.
-
indirectBuffer.
usage
containsINDIRECT
. -
indirectOffset + sizeof(indirect drawIndexed parameters) ≤ indirectBuffer.
size
. -
indirectOffset is a multiple of 4.
-
-
Add indirectBuffer to the usage scope as input.
-
Increment this.
[[drawCount]]
by 1. -
Let passState be a snapshot of this’s current state.
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Let indexCount be an unsigned 32-bit integer read from indirectBuffer at indirectOffset bytes.
-
Let instanceCount be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 4) bytes.
-
Let firstIndex be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 8) bytes.
-
Let baseVertex be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 12) bytes.
-
Let firstInstance be an unsigned 32-bit integer read from indirectBuffer at(indirectOffset + 16) bytes.
-
Draw instanceCount instances, starting with instance firstInstance, of primitives consisting of indexCount indexed vertices, starting with index firstIndex from vertex baseVertex, with the states from passState and renderState.
-
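The queue-timeline reads in the drawIndexedIndirect() steps above can be sketched as a plain function (hypothetical helper; little-endian byte order is assumed, and baseVertex is read as an unsigned value to match the step wording):

```javascript
// Sketch of the five u32 reads performed at execution time, starting at
// indirectOffset bytes into the indirect buffer's contents.
function readDrawIndexedIndirect(arrayBuffer, indirectOffset) {
  const view = new DataView(arrayBuffer);
  return {
    indexCount:    view.getUint32(indirectOffset + 0,  true),
    instanceCount: view.getUint32(indirectOffset + 4,  true),
    firstIndex:    view.getUint32(indirectOffset + 8,  true),
    baseVertex:    view.getUint32(indirectOffset + 12, true),
    firstInstance: view.getUint32(indirectOffset + 16, true),
  };
}
```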
To determine if it’s valid to draw with GPURenderCommandsMixin
encoder run the following steps:
-
If any of the following conditions are unsatisfied, return
false
:-
Validate encoder bind groups(encoder, encoder.
[[pipeline]]
)must betrue
. -
Let pipelineDescriptor be encoder.
[[pipeline]]
.[[descriptor]]
. -
For each
GPUIndex32
slot from 0
to pipelineDescriptor.vertex
.buffers
.length:-
If pipelineDescriptor.
vertex
.buffers
[slot] is not null
, encoder.[[vertex_buffers]]
must contain slot.
-
-
Validate
maxBindGroupsPlusVertexBuffers
:-
Let bindGroupSpaceUsed be(the maximum key in encoder.
[[bind_groups]]
) + 1. -
Let vertexBufferSpaceUsed be(the maximum key in encoder.
[[vertex_buffers]]
) + 1. -
bindGroupSpaceUsed + vertexBufferSpaceUsed must be ≤ encoder.
[[device]]
.[[limits]]
.maxBindGroupsPlusVertexBuffers
.
-
-
-
Otherwise return
true
.
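The maxBindGroupsPlusVertexBuffers check in the "valid to draw" steps above can be sketched as a plain function (hypothetical shapes; bindGroups and vertexBuffers are maps keyed by slot index):

```javascript
// Sketch of the maxBindGroupsPlusVertexBuffers validation:
// (max bind group key + 1) + (max vertex buffer key + 1) must not exceed the limit.
function bindingSpaceFits(bindGroups, vertexBuffers, maxBindGroupsPlusVertexBuffers) {
  const maxKey = (m) => (m.size === 0 ? -1 : Math.max(...m.keys()));
  const bindGroupSpaceUsed = maxKey(bindGroups) + 1;
  const vertexBufferSpaceUsed = maxKey(vertexBuffers) + 1;
  return bindGroupSpaceUsed + vertexBufferSpaceUsed <= maxBindGroupsPlusVertexBuffers;
}
```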
To determine if it’s valid to draw indexed with GPURenderCommandsMixin
encoder run the following steps:
-
If any of the following conditions are unsatisfied, return
false
:-
It must be valid to draw with encoder.
-
encoder.
[[index_buffer]]
must not be null
. -
Let topology be encoder.
[[pipeline]]
.[[descriptor]]
.primitive
.topology
. -
If topology is
"line-strip"
or"triangle-strip"
:-
encoder.
[[index_format]]
must equal encoder.[[pipeline]]
.[[descriptor]]
.primitive
.stripIndexFormat
.
-
-
-
Otherwise return
true
.
17.2.2. Rasterization state
The GPURenderPassEncoder
has several methods which affect how draw commands are rasterized to attachments used by this encoder.
setViewport(x, y, width, height, minDepth, maxDepth)
-
Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.
Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setViewport(x, y, width, height, minDepth, maxDepth) method. Parameter Type Nullable Optional Description x
float
✘ ✘ Minimum X value of the viewport in pixels. y
float
✘ ✘ Minimum Y value of the viewport in pixels. width
float
✘ ✘ Width of the viewport in pixels. height
float
✘ ✘ Height of the viewport in pixels. minDepth
float
✘ ✘ Minimum depth value of the viewport. maxDepth
float
✘ ✘ Maximum depth value of the viewport. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
x ≥
0
-
y ≥
0
-
width ≥
0
-
height ≥
0
-
x + width ≤ this.
[[attachment_size]]
.width -
y + height ≤ this.
[[attachment_size]]
.height -
0.0 ≤ minDepth ≤ 1.0
-
0.0 ≤ maxDepth ≤ 1.0
-
minDepth < maxDepth
-
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Round x, y, width, and height to some uniform precision, no less precise than integer rounding.
-
Set renderState.
[[viewport]]
to the extents x, y, width, height, minDepth, and maxDepth.
-
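The setViewport() validation conditions above can be sketched as a plain function (hypothetical helper name and attachmentSize shape):

```javascript
// Sketch of setViewport() validation against the attachment size and depth range.
function viewportValid(x, y, width, height, minDepth, maxDepth, attachmentSize) {
  return x >= 0 && y >= 0 && width >= 0 && height >= 0
    && x + width <= attachmentSize.width
    && y + height <= attachmentSize.height
    && minDepth >= 0.0 && minDepth <= 1.0
    && maxDepth >= 0.0 && maxDepth <= 1.0
    && minDepth < maxDepth;
}
```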
setScissorRect(x, y, width, height)
-
Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates, any fragments which fall outside the scissor rectangle will be discarded.
Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setScissorRect(x, y, width, height) method. Parameter Type Nullable Optional Description x
GPUIntegerCoordinate
✘ ✘ Minimum X value of the scissor rectangle in pixels. y
GPUIntegerCoordinate
✘ ✘ Minimum Y value of the scissor rectangle in pixels. width
GPUIntegerCoordinate
✘ ✘ Width of the scissor rectangle in pixels. height
GPUIntegerCoordinate
✘ ✘ Height of the scissor rectangle in pixels. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
x+width ≤ this.
[[attachment_size]]
.width. -
y+height ≤ this.
[[attachment_size]]
.height.
-
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Set renderState.
[[scissorRect]]
to the extents x, y, width, and height.
-
setBlendConstant(color)
-
Sets the constant blend color and alpha values used with
"constant"
and"one-minus-constant"
GPUBlendFactor
s.Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setBlendConstant(color) method. Parameter Type Nullable Optional Description color
GPUColor
✘ ✘ The color to use when blending. Returns:
undefined
Content timeline steps:
-
? validate GPUColor shape(color).
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Set renderState.
[[blendConstant]]
to color.
-
setStencilReference(reference)
-
Sets the
[[stencilReference]]
value used during stencil tests withthe"replace"
GPUStencilOperation
.Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.setStencilReference(reference) method. Parameter Type Nullable Optional Description reference
GPUStencilValue
✘ ✘ The new stencil reference value. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Set renderState.
[[stencilReference]]
to reference.
-
17.2.3. Queries
beginOcclusionQuery(queryIndex)
-
Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.beginOcclusionQuery(queryIndex) method. Parameter Type Nullable Optional Description queryIndex
GPUSize32
✘ ✘ The index of the query in the query set. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
this.
[[occlusion_query_set]]
is not null
. -
queryIndex < this.
[[occlusion_query_set]]
.count
. -
The query at same queryIndex must not have been previously written to in this pass.
-
this.
[[occlusion_query_active]]
isfalse
.
-
-
Set this.
[[occlusion_query_active]]
totrue
. -
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Set renderState.
[[occlusionQueryIndex]]
to queryIndex.
-
endOcclusionQuery()
-
Called on:
GPURenderPassEncoder
this.Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
this.
[[occlusion_query_active]]
istrue
.
-
-
Set this.
[[occlusion_query_active]]
tofalse
. -
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
Let passingFragments be non-zero if any fragment samples passed all per-fragment tests since the corresponding
beginOcclusionQuery()
command was executed, and zero otherwise. Note: If no draw calls occurred, passingFragments is zero.
-
Write passingFragments into this.
[[occlusion_query_set]]
at index renderState.[[occlusionQueryIndex]]
.
-
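The begin/endOcclusionQuery() state tracking above can be sketched as plain functions (hypothetical shapes; usedQueryIndices stands in for "not previously written to in this pass"):

```javascript
// Sketch of occlusion query state validation and tracking.
function beginOcclusionQuery(pass, queryIndex) {
  if (pass.occlusionQuerySet === null) return false;         // a query set must be attached
  if (queryIndex >= pass.occlusionQuerySet.count) return false;
  if (pass.usedQueryIndices.has(queryIndex)) return false;   // index not already written in this pass
  if (pass.occlusionQueryActive) return false;               // no nested queries
  pass.usedQueryIndices.add(queryIndex);
  pass.occlusionQueryActive = true;
  return true;
}

function endOcclusionQuery(pass) {
  if (!pass.occlusionQueryActive) return false;              // a query must be active
  pass.occlusionQueryActive = false;
  return true;
}
```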
17.2.4. Bundles
executeBundles(bundles)
-
Executes the commands previously recorded into the given
GPURenderBundle
s as part of this render pass. When a
GPURenderBundle
is executed, it does not inherit the render pass’s pipeline, bind groups, or vertex and index buffers. After a GPURenderBundle
has executed, the render pass’s pipeline, bind group, and vertex/index buffer state is cleared (to the initial, empty values). Note: The state is cleared, not restored to the previous state. This occurs even if zero
GPURenderBundles
are executed.Called on:
GPURenderPassEncoder
this.Arguments:
Arguments for the GPURenderPassEncoder.executeBundles(bundles) method. Parameter Type Nullable Optional Description bundles
sequence<GPURenderBundle>
✘ ✘ List of render bundles to execute. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
[[device]]
.
Device timeline steps:
-
Validate the encoder state of this. If it returns false, stop.
-
If any of the following conditions are unsatisfied, make this invalid and stop.
-
For each bundle in bundles:
-
bundle must be valid to use with this.
-
this.
[[layout]]
must equal bundle.[[layout]]
. -
If this.
[[depthReadOnly]]
is true, bundle.[[depthReadOnly]]
must be true. -
If this.
[[stencilReadOnly]]
is true, bundle.[[stencilReadOnly]]
must be true.
-
-
-
For each bundle in bundles:
-
Increment this.
[[drawCount]]
by bundle.[[drawCount]]
.
-
-
Clear this.
[[bind_groups]]
. -
Set this.
[[pipeline]]
to null
. -
Set this.
[[index_buffer]]
to null
. -
Clear this.
[[vertex_buffers]]
. -
Let passState be a snapshot of this’s current state.
-
Enqueue a render command on this which issues the subsequent steps on the Queue timeline with renderState when executed.
Queue timeline steps:
-
For each bundle in bundles:
-
Execute each command in bundle.
[[command_list]]
with passState and renderState. Note: renderState cannot be changed by executing render bundles. Also note that no mutable passState state is visible to render bundles.
-
-
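The executeBundles() compatibility checks and state reset above can be sketched as a plain function (hypothetical shapes; the layout comparison is simplified to a structural check):

```javascript
// Sketch of executeBundles(): each bundle must be compatible with the pass,
// then pass state is cleared (even when bundles is empty).
function executeBundlesSketch(pass, bundles) {
  for (const bundle of bundles) {
    if (JSON.stringify(pass.layout) !== JSON.stringify(bundle.layout)) return false;
    if (pass.depthReadOnly && !bundle.depthReadOnly) return false;
    if (pass.stencilReadOnly && !bundle.stencilReadOnly) return false;
  }
  for (const bundle of bundles) pass.drawCount += bundle.drawCount;
  // State is cleared, not restored, regardless of how many bundles ran.
  pass.bindGroups.clear();
  pass.pipeline = null;
  pass.indexBuffer = null;
  pass.vertexBuffers.clear();
  return true;
}
```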
18. Bundles
A bundle is a partial, limited pass that is encoded once and can then be executed multiple times as part of future pass encoders without expiring after use like typical command buffers. This can reduce the overhead of encoding and submission of commands which are issued repeatedly without changing.
18.1. GPURenderBundle
[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundle {};
GPURenderBundle includes GPUObjectBase;
[[command_list]]
, of type list<GPU command>-
A list of GPU commands to be submitted to the
GPURenderPassEncoder
when theGPURenderBundle
is executed. [[layout]]
, of typeGPURenderPassLayout
-
The layout of the render bundle.
[[depthReadOnly]]
, of type boolean-
If
true
, indicates that the depth component is not modified by executing this render bundle. [[stencilReadOnly]]
, of type boolean-
If
true
, indicates that the stencil component is not modified by executing this render bundle. [[drawCount]]
, of typeGPUSize64
-
The number of draw commands in this
GPURenderBundle
.
18.1.1. Render Bundle Creation
dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {};
[Exposed=(Window, Worker), SecureContext]
interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUCommandsMixin;
GPURenderBundleEncoder includes GPUDebugCommandsMixin;
GPURenderBundleEncoder includes GPUBindingCommandsMixin;
GPURenderBundleEncoder includes GPURenderCommandsMixin;
createRenderBundleEncoder(descriptor)
-
Creates a
GPURenderBundleEncoder
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createRenderBundleEncoder(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderBundleEncoderDescriptor
✘ ✘ Description of the GPURenderBundleEncoder
to create.Returns:
GPURenderBundleEncoder
Content timeline steps:
-
? Validate texture format required features of each non-
null
element of descriptor.colorFormats
with this.[[device]]
. -
? Validate texture format required features of descriptor.
depthStencilFormat
with this.[[device]]
. -
Let e be a new
GPURenderBundleEncoder
object. -
Issue the initialization steps on the Device timeline of this.
-
Return e.
Device timeline initialization steps:
-
If any of the following conditions are unsatisfied generate a validation error, make e invalid, and stop.
-
this is valid.
-
descriptor.
colorFormats
.length must be ≤ this.[[limits]]
.maxColorAttachments
. -
For each non-
null
colorFormat in descriptor.colorFormats
:-
colorFormat must be a color renderable format.
-
-
Calculating color attachment bytes per sample(descriptor.
colorFormats
)must be ≤ this.[[limits]]
.maxColorAttachmentBytesPerSample
. -
If descriptor.
depthStencilFormat
is provided:-
descriptor.
depthStencilFormat
must be a depth-or-stencil format.
-
-
There must exist at least one attachment, either:
-
A non-
null
value in descriptor.colorFormats
, or -
A descriptor.
depthStencilFormat
.
-
-
-
Set e.
[[layout]]
to a copy of descriptor’s included GPURenderPassLayout
interface. -
Set e.
[[depthReadOnly]]
to descriptor.depthReadOnly
. -
Set e.
[[stencilReadOnly]]
to descriptor.stencilReadOnly
. -
Set e.
[[state]]
to "open". -
Set e.
[[drawCount]]
to 0.
Describe the rest of the steps for
createRenderBundleEncoder()
. -
18.1.2. Encoding
dictionary GPURenderBundleEncoderDescriptor : GPURenderPassLayout {
    boolean depthReadOnly = false;
    boolean stencilReadOnly = false;
};
depthReadOnly
, of type boolean, defaulting tofalse
-
If
true
, indicates that the render bundle does not modify the depth component of the GPURenderPassDepthStencilAttachment
of any render pass the render bundle is executed in. stencilReadOnly
, of type boolean, defaulting tofalse
-
If
true
, indicates that the render bundle does not modify the stencil component of the GPURenderPassDepthStencilAttachment
of any render pass the render bundle is executed in.
18.1.3. Finalization
finish(descriptor)
-
Completes recording of the render bundle commands sequence.
Called on:
GPURenderBundleEncoder
this.Arguments:
Arguments for the GPURenderBundleEncoder.finish(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPURenderBundleDescriptor
✘ ✔ Returns:
GPURenderBundle
Content timeline steps:
-
Let renderBundle be a new
GPURenderBundle
. -
Issue the finish steps on the Device timeline of this.
[[device]]
. -
Return renderBundle.
Device timeline finish steps:
-
Let validationSucceeded be
true
if all of the following requirements are met, and false
otherwise.-
this must be valid.
-
this.
[[state]]
must be "open". -
this.
[[debug_group_stack]]
must be empty. -
Every usage scope contained in this must satisfy the usage scope validation.
-
-
Set this.
[[state]]
to "ended". -
If validationSucceeded is
false
, then:-
Generate a validation error.
-
Return a new invalid
GPURenderBundle
.
-
-
Set renderBundle.
[[command_list]]
to this.[[commands]]
. -
Set renderBundle.
[[drawCount]]
to this.[[drawCount]]
.
-
19. Queues
19.1. GPUQueueDescriptor
GPUQueueDescriptor
describes a queue request.
dictionary GPUQueueDescriptor : GPUObjectDescriptorBase {};
19.2. GPUQueue
[Exposed=(Window, Worker), SecureContext]
interface GPUQueue {
    undefined submit(sequence<GPUCommandBuffer> commandBuffers);
    Promise<undefined> onSubmittedWorkDone();
    undefined writeBuffer(
        GPUBuffer buffer,
        GPUSize64 bufferOffset,
        data,
        optional GPUSize64 dataOffset = 0,
        optional GPUSize64 size);
    undefined writeTexture(
        GPUImageCopyTexture destination,
        data,
        GPUImageDataLayout dataLayout,
        GPUExtent3D size);
    undefined copyExternalImageToTexture(
        GPUImageCopyExternalImage source,
        GPUImageCopyTextureTagged destination,
        GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;
GPUQueue
has the following methods:
writeBuffer(buffer, bufferOffset, data, dataOffset, size)
-
Issues a write operation of the provided data into a
GPUBuffer
.Called on:
GPUQueue
this.Arguments:
Arguments for the GPUQueue.writeBuffer(buffer, bufferOffset, data, dataOffset, size) method. Parameter Type Nullable Optional Description buffer
GPUBuffer
✘ ✘ The buffer to write to. bufferOffset
GPUSize64
✘ ✘ Offset in bytes into buffer to begin writing at. data
✘ ✘ Data to write into buffer. dataOffset
GPUSize64
✘ ✔ Offset into data to begin writing from. Given in elements if data is a TypedArray
and bytes otherwise.size
GPUSize64
✘ ✔ Size of content to write from data to buffer. Given in elements if data is a TypedArray
and bytes otherwise.Returns:
undefined
Content timeline steps:
-
If data is an
ArrayBuffer
orDataView
, let the element type be "byte". Otherwise, data is a TypedArray; let the element type be the type of the TypedArray. -
Let dataSize be the size of data, in elements.
-
If size is missing, let contentsSize be dataSize − dataOffset. Otherwise, let contentsSize be size.
-
If any of the following conditions are unsatisfied, throw
OperationError
and stop.-
contentsSize ≥ 0.
-
dataOffset + contentsSize ≤ dataSize.
-
contentsSize, converted to bytes, is a multiple of 4 bytes.
-
-
Let dataContents be a copy of the bytes held by the buffer source data.
-
Let contents be the contentsSize elements of dataContents starting at an offset of dataOffset elements.
-
Issue the subsequent steps on the Device timeline of this.
Device timeline steps:
-
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
buffer is valid to use with this.
-
buffer.
[[internals]]
.state is "available". -
buffer.
usage
includesCOPY_DST
. -
bufferOffset, converted to bytes, is a multiple of 4 bytes.
-
bufferOffset + contentsSize, converted to bytes, ≤ buffer.
size
bytes.
-
-
Write contents into buffer starting at bufferOffset.
-
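The size and offset checks above can be sketched as a plain function. This is a non-normative illustration, not part of the specification; `validateWriteBufferArgs` is a hypothetical name, and `size === undefined` stands in for the "size is missing" case.

```javascript
// Non-normative sketch of the writeBuffer() content timeline size checks.
// validateWriteBufferArgs is a hypothetical helper, not a WebGPU API.
function validateWriteBufferArgs(data, dataOffset, size) {
  // Element type is "byte" for ArrayBuffer/DataView, or the TypedArray's
  // element type otherwise.
  const isTypedArray = ArrayBuffer.isView(data) && !(data instanceof DataView);
  const bytesPerElement = isTypedArray ? data.BYTES_PER_ELEMENT : 1;
  const dataSize = isTypedArray ? data.length : data.byteLength;
  // If size is missing, write everything from dataOffset to the end of data.
  const contentsSize = size === undefined ? dataSize - dataOffset : size;
  if (contentsSize < 0) return false;                      // contentsSize ≥ 0
  if (dataOffset + contentsSize > dataSize) return false;  // stays within data
  // contentsSize, converted to bytes, must be a multiple of 4.
  return (contentsSize * bytesPerElement) % 4 === 0;
}
```

Where this sketch returns `false`, the real API would throw an `OperationError`.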
writeTexture(destination, data, dataLayout, size)
-
Issues a write operation of the provided data into a
GPUTexture
.Called on:
GPUQueue
this.Arguments:
Arguments for the GPUQueue.writeTexture(destination, data, dataLayout, size) method. Parameter Type Nullable Optional Description destination
GPUImageCopyTexture
✘ ✘ The texture subresource and origin to write to. data
✘ ✘ Data to write into destination. dataLayout
GPUImageDataLayout
✘ ✘ Layout of the content in data. size
GPUExtent3D
✘ ✘ Extents of the content to write from data to destination. Returns:
undefined
Content timeline steps:
-
? validate GPUOrigin3D shape(destination.
origin
). -
? validate GPUExtent3D shape(size).
-
Let dataBytes be a copy of the bytes held by the buffer source data.
-
Issue the subsequent steps on the Device timeline of this.
Device timeline steps:
-
Let texture be destination.
texture
. -
If any of the following conditions are unsatisfied, generate a validation error and stop.
-
validating GPUImageCopyTexture(destination, size) returns
true
. -
texture.
usage
includesCOPY_DST
. -
texture.
sampleCount
is 1. -
validating texture copy range(destination, size) returns
true
. -
destination.
aspect
must refer to a single aspect of texture.format
. -
That aspect must be a valid image copy destination according to § 26.1.2 Depth-stencil formats.
-
Let aspectSpecificFormat = texture.
format
. -
If texture.
format
is a depth-or-stencil format:-
Set aspectSpecificFormat to the aspect-specific format of texture.
format
according to § 26.1.2 Depth-stencil formats.
-
-
validating linear texture data(dataLayout, dataBytes.length, aspectSpecificFormat, size) succeeds.
Note: unlike
GPUCommandEncoder
.copyBufferToTexture()
, there is no alignment requirement on either dataLayout.bytesPerRow
or dataLayout.offset
. -
-
Let contents be the contents of the images seen by viewing dataBytes with dataLayout and size.
Specify more formally.
Note: This is described as copying all of data to the device timeline, but in practice data could be much larger than necessary. Implementations should optimize by copying only the necessary bytes.
-
Issue the subsequent steps on the Queue timeline of this.
Queue timeline steps:
-
Write contents into destination.
Define copy, including provision for snorm.
-
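The "validating linear texture data" size requirement used above can be illustrated for an uncompressed format (1×1 texel blocks). This is a non-normative sketch; `requiredLinearDataSize` and `bytesPerTexel` are hypothetical names, and compressed formats with larger block sizes are deliberately ignored.

```javascript
// Non-normative sketch: minimum data size required by writeTexture() for an
// uncompressed format, given a GPUImageDataLayout and a copy extent.
function requiredLinearDataSize(layout, copySize, bytesPerTexel) {
  const offset = layout.offset ?? 0;
  const rowsPerImage = layout.rowsPerImage ?? copySize.height;
  if (copySize.width === 0 || copySize.height === 0 ||
      copySize.depthOrArrayLayers === 0) {
    return offset; // an empty copy reads no texel data
  }
  const bytesInLastRow = copySize.width * bytesPerTexel;
  // All images except the last occupy bytesPerRow * rowsPerImage bytes; within
  // the last image, only the final row may be shorter than bytesPerRow.
  return offset +
      (copySize.depthOrArrayLayers - 1) * layout.bytesPerRow * rowsPerImage +
      (copySize.height - 1) * layout.bytesPerRow +
      bytesInLastRow;
}
```

For example, a 4×4 "rgba8unorm" copy (4 bytes per texel) with a 256-byte row pitch needs 3 full rows plus one 16-byte final row of data.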
copyExternalImageToTexture(source, destination, copySize)
-
Issues a copy operation of the contents of a platform image/canvas into the destination texture.
This operation performs color encoding into the destination encoding according to the parameters of
GPUImageCopyTextureTagged
.Copying into a
-srgb
texture results in the same texture bytes, not the same decoded values, as copying into the corresponding non--srgb
format. Thus, after a copy operation, sampling the destination texture has different results depending on whether its format is -srgb
, all else unchanged. NOTE:
When copying from a
"webgl"
/"webgl2"
context canvas, the WebGL Drawing Buffer may not exist during certain points in the frame presentation cycle (after the image has been moved to the compositor for display). To avoid this, either:-
Issue
copyExternalImageToTexture()
in the same task as the WebGL rendering operation, to ensure the copy occurs before the WebGL canvas is presented. -
If not possible, set the
preserveDrawingBuffer
option inWebGLContextAttributes
totrue
, so that the drawing buffer will still contain a copy of the frame contents after they’ve been presented. Note, this extra copy may have a performance cost.
Called on:
GPUQueue
this.Arguments:
Arguments for the GPUQueue.copyExternalImageToTexture(source, destination, copySize) method. Parameter Type Nullable Optional Description source
GPUImageCopyExternalImage
✘ ✘ source image and origin to copy to destination. destination
GPUImageCopyTextureTagged
✘ ✘ The texture subresource and origin to write to, and its encoding metadata. copySize
GPUExtent3D
✘ ✘ Extents of the content to write from source to destination. Returns:
undefined
Content timeline steps:
-
? validate GPUOrigin2D shape(source.
origin
). -
? validate GPUOrigin3D shape(destination.
origin
). -
? validate GPUExtent3D shape(copySize).
-
Let sourceImage be source.
source
-
If sourceImage is not origin-clean,throw a
SecurityError
and stop. -
If any of the following requirements are unmet, throw an
OperationError
and stop.-
source.origin.x + copySize.width must be ≤ the width of sourceImage.
-
source.origin.y + copySize.height must be ≤ the height of sourceImage.
-
source.origin.z + copySize.depthOrArrayLayers must be ≤ 1.
-
-
Let usability be ? check the usability of the image argument(source).
-
Issue the subsequent steps on the Device timeline of this.
Device timeline steps:
-
Let texture be destination.
texture
. -
If any of the following requirements are unmet, generate a validation error and stop.
-
usability must be
good
. -
destination.
texture
must be valid to use with this. -
validating GPUImageCopyTexture(destination, copySize) must return
true
. -
validating texture copy range(destination, copySize) must return
true
. -
texture.
usage
must include bothRENDER_ATTACHMENT
andCOPY_DST
. -
texture.
dimension
must be"2d"
. -
texture.
sampleCount
must be 1. -
texture.
format
must be one of the followingformats (which all supportRENDER_ATTACHMENT
usage):-
"r8unorm"
-
"r16float"
-
"r32float"
-
"rg8unorm"
-
"rg16float"
-
"rg32float"
-
"rgba8unorm"
-
"rgba8unorm-srgb"
-
"bgra8unorm"
-
"bgra8unorm-srgb"
-
"rgb10a2unorm"
-
"rgba16float"
-
"rgba32float"
-
-
-
Do the actual copy.
-
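The source-bounds requirements that throw an `OperationError` above amount to a simple containment check. A non-normative sketch follows; `sourceCopyFits` is a hypothetical name.

```javascript
// Non-normative sketch of the copyExternalImageToTexture() source-bounds
// checks: the copy rectangle must lie within the source image, and the
// external source has no depth (at most one layer).
function sourceCopyFits(origin, copySize, imageWidth, imageHeight) {
  return origin.x + copySize.width <= imageWidth &&
         origin.y + copySize.height <= imageHeight &&
         origin.z + copySize.depthOrArrayLayers <= 1;
}
```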
submit(commandBuffers)
-
Schedules the execution of the command buffers by the GPU on this queue.
Submitted command buffers cannot be used again.
Called on:
GPUQueue
this.Arguments:
Arguments for the GPUQueue.submit(commandBuffers) method. Parameter Type Nullable Optional Description commandBuffers
sequence<GPUCommandBuffer>
✘ ✘ Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this:
Device timeline steps:
-
If any of the following requirements are unmet, generate a validation error and stop.
-
Every
GPUCommandBuffer
in commandBuffers must be valid to use with this. -
For each of the following types of resources used by any command in any element of commandBuffers:
GPUBuffer
b-
b.
[[internals]]
.state mustbe "available". GPUTexture
t-
t.
[[destroyed]]
must befalse
. GPUExternalTexture
et-
et.
[[expired]]
must befalse
. GPUQuerySet
qs-
qs must be in the available state. For occlusion queries, the
occlusionQuerySet
inbeginRenderPass()
is not "used" unless it is also used by beginOcclusionQuery()
.
-
-
For each commandBuffer in commandBuffers:
-
Make commandBuffer invalid.
-
-
Issue the subsequent steps on the Queue timeline of this:
Queue timeline steps:
-
For each commandBuffer in commandBuffers:
-
Execute each command in commandBuffer.
[[command_list]]
.
-
-
onSubmittedWorkDone()
-
Returns a
Promise
that resolves once this queue finishes processing all the work submitted up to this moment. Resolution of this
Promise
implies the completion of mapAsync()
calls made prior to that call, on GPUBuffer
s last used exclusively on that queue. Called on:
GPUQueue
this.Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Issue the synchronization steps on the Device timeline of this.
-
Return promise.
Device timeline synchronization steps:
-
When the device timeline becomes informed of the completion of all currently-enqueued operations on this, or if this is lost, or when this becomes lost:
-
Issue the subsequent steps on contentTimeline.
-
Content timeline steps:
-
Resolve promise.
-
20. Queries
20.1. GPUQuerySet
[Exposed=(Window, Worker), SecureContext]
interface GPUQuerySet {
    undefined destroy();
    readonly attribute GPUQueryType type;
    readonly attribute GPUSize32Out count;
};
GPUQuerySet includes GPUObjectBase;
GPUQuerySet
has the following attributes:
type
, of type GPUQueryType, readonly-
The type of the queries managed by this
GPUQuerySet
. count
, of type GPUSize32Out, readonly-
The number of queries managed by this
GPUQuerySet
.
GPUQuerySet
has the following internal slots:
[[state]]
, of type query set state-
The current state of the
GPUQuerySet
.
Each GPUQuerySet
has a current query set state on the Device timeline which is one of the following:
- "available"
-
The
GPUQuerySet
is available for GPU operations on its content. - "destroyed"
-
The
GPUQuerySet
is no longer available for any operations except destroy
.
20.1.1. QuerySet Creation
A GPUQuerySetDescriptor
specifies the options to use in creating a GPUQuerySet
.
dictionary GPUQuerySetDescriptor : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
};
type
, of type GPUQueryType-
The type of queries managed by
GPUQuerySet
. count
, of type GPUSize32-
The number of queries managed by
GPUQuerySet
.
createQuerySet(descriptor)
-
Creates a
GPUQuerySet
.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.createQuerySet(descriptor) method. Parameter Type Nullable Optional Description descriptor
GPUQuerySetDescriptor
✘ ✘ Description of the GPUQuerySet
to create.Returns:
GPUQuerySet
Content timeline steps:
-
If descriptor.
type
is"timestamp"
,but"timestamp-query"
is not enabled for this:-
Throw a
TypeError
.
-
-
Let q be a new
GPUQuerySet
object. -
Set q.
type
to descriptor.type
. -
Set q.
count
to descriptor.count
. -
Issue the initialization steps on the Device timeline of this.
-
Return q.
Device timeline initialization steps:
-
If any of the following requirements are unmet, generate a validation error,make q invalid, and stop.
-
this is valid.
-
descriptor.
count
must be ≤ 4096.
-
-
Set q.
[[state]]
to available.
-
Creating a GPUQuerySet
which holds 32 occlusion query results.
const querySet = gpuDevice.createQuerySet({ type: 'occlusion', count: 32 });
20.1.2. QuerySet Destruction
An application that no longer requires a GPUQuerySet
can choose to lose access to it before garbage collection by calling destroy()
.
destroy()
-
Destroys the
GPUQuerySet
.Called on:
GPUQuerySet
this.Returns:
undefined
Content timeline steps:
-
Set this.
[[state]]
to destroyed.
-
20.2. QueryType
enum GPUQueryType {
    "occlusion",
    "timestamp",
};
20.3. Occlusion Query
Occlusion query is only available on render passes, to query the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha to coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline; 0 indicates that no samples passed the tests.
When beginning a render pass, GPURenderPassDescriptor
.occlusionQuerySet
must be set to be able to use occlusion queries during the pass. An occlusion query is begun and ended by calling beginOcclusionQuery()
and endOcclusionQuery()
in pairs that cannot be nested, and resolved into a GPUBuffer
as a 64-bit unsigned integer by
.resolveQuerySet()
.
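Since each query resolves to a 64-bit unsigned integer and any non-zero value means at least one sample passed, resolved results can be interpreted as below. This is a non-normative sketch; `visibleQueries` is a hypothetical helper operating on a BigUint64Array read back from the resolve buffer.

```javascript
// Non-normative sketch: interpreting resolved occlusion query results.
// Each query is a 64-bit unsigned integer; non-zero means at least one
// fragment sample passed all per-fragment tests.
function visibleQueries(results /* BigUint64Array */) {
  const visible = [];
  results.forEach((value, queryIndex) => {
    if (value !== 0n) visible.push(queryIndex);
  });
  return visible;
}
```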
20.4. Timestamp Query
Timestamp queries allow applications to write timestamps to a GPUQuerySet
, using:
-
GPUComputePassDescriptor
.timestampWrites
-
GPURenderPassDescriptor
.timestampWrites
and then resolve timestamp values (in nanoseconds as a 64-bit unsigned integer) into a GPUBuffer
, using GPUCommandEncoder
.resolveQuerySet()
.
Timestamp values are implementation defined and may not increase monotonically. The physical device may reset the timestamp counter occasionally, which can result in unexpected values such as negative deltas between timestamps that logically should be monotonically increasing. These instances should be rare and can safely be ignored. Applications should not be written in such a way that unexpected timestamps cause an application failure.
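Because deltas can occasionally be negative, timing code should discard them rather than fail. A non-normative sketch (`passDurationNs` is a hypothetical helper; resolved timestamps are read back as BigInt values from a BigUint64Array):

```javascript
// Non-normative sketch: compute a pass duration from two resolved
// timestamps, ignoring the rare non-monotonic (negative) deltas.
function passDurationNs(beginNs, endNs) {
  const delta = endNs - beginNs;     // BigInt nanoseconds
  return delta >= 0n ? delta : null; // null: counter reset, discard sample
}
```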
Timestamp queries are implemented using high-resolution timers (see § 2.1.7.2 Device/queue-timeline timing).To mitigate security and privacy concerns, their precision must be reduced:
To get the current queue timestamp:
-
Let fineTimestamp be the current timestamp value of the current queue timeline, in nanoseconds, relative to an implementation-defined point in the past.
-
Return the result of calling coarsen time on fineTimestamp.
Note: Since cross-origin isolation may not apply to the device timeline or queue timeline, crossOriginIsolatedCapability
is never set to true
.
Validate timestampWrites(GPUDevice device, (GPUComputePassTimestampWrites or GPURenderPassTimestampWrites) timestampWrites)
Return true
if the following requirements are met, and false
if not.
-
"timestamp-query"
must be enabled for device. -
timestampWrites.
querySet
must be valid to use with device. -
timestampWrites.
querySet
.type
must be"timestamp"
. -
Of the write index members in timestampWrites (
beginningOfPassWriteIndex
,endOfPassWriteIndex
):
21. Canvas Rendering
21.1. HTMLCanvasElement.getContext()
A GPUCanvasContext
object is created via the getContext()
method of an HTMLCanvasElement
instance by passing the string literal 'webgpu'
as its contextType
argument.
Get a GPUCanvasContext
from an offscreen HTMLCanvasElement
:
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');
Unlike WebGL or 2D context creation, the second argument of HTMLCanvasElement.getContext()
or OffscreenCanvas.getContext()
, the context creation attribute dictionary options
, is ignored. Instead, use GPUCanvasContext.configure()
, which allows changing the canvas configuration without replacing the canvas.
To create a 'webgpu' context on a canvas (HTMLCanvasElement
or OffscreenCanvas
) canvas:
-
Let context be a new
GPUCanvasContext
. -
Set context.
canvas
to canvas. -
Replace the drawing buffer of context.
-
Return context.
Note: User agents should consider issuing developer-visible warnings when an ignored options
argument is provided when calling getContext()
to get a WebGPU canvas context.
21.2. GPUCanvasContext
[Exposed=(Window, Worker), SecureContext]
interface GPUCanvasContext {
    readonly attribute (HTMLCanvasElement or OffscreenCanvas) canvas;
    undefined configure(GPUCanvasConfiguration configuration);
    undefined unconfigure();
    GPUTexture getCurrentTexture();
};
GPUCanvasContext
has the following attributes:
canvas
, of type(HTMLCanvasElement or OffscreenCanvas)
, readonly-
The canvas this context was created from.
GPUCanvasContext
has the following internal slots:
[[configuration]]
, of typeGPUCanvasConfiguration
?, initiallynull
-
The options this context is currently configured with.
null
if the context has not been configured or has beenunconfigured
. [[textureDescriptor]]
, of typeGPUTextureDescriptor
?, initiallynull
-
The currently configured texture descriptor, derived from the
[[configuration]]
and canvas.null
if the context has not been configured or has beenunconfigured
. [[drawingBuffer]]
, an image, initiallya transparent black image with the same size as the canvas-
The drawing buffer is the working-copy image data of the canvas. It is exposed as writable by
[[currentTexture]]
(returned bygetCurrentTexture()
). The drawing buffer is used to get a copy of the image contents of a context, which occurs when the canvas is displayed or otherwise read. It may be transparent, even if
[[configuration]]
.alphaMode
is"opaque"
. ThealphaMode
only affects theresult of the "get a copy of the image contents of a context" algorithm.The drawing buffer outlives the
[[currentTexture]]
and contains thepreviously-rendered contents even after the canvas has been presented.It is only cleared in Replace the drawing buffer.Any time the drawing buffer is read, implementations must ensure that all previouslysubmitted work (e.g. queue submissions) have completed writing to it via
[[currentTexture]]
. [[currentTexture]]
, of typeGPUTexture
?, initiallynull
-
The
GPUTexture
to draw into for the current frame. It exposes a writable view onto the underlying [[drawingBuffer]]
.getCurrentTexture()
populates this slot ifnull
, then returns it. In the steady-state of a visible canvas, any changes to the drawing buffer made through the currentTexture get presented when updating the rendering of a WebGPU canvas. At or before that point, the texture is also destroyed and
[[currentTexture]]
is set to null
, signalling that a new one is to be created by the next call to getCurrentTexture()
.Destroying
the currentTexture has no effect on the drawing buffer contents; it only terminates write-access to the drawing buffer early. During the same frame, getCurrentTexture()
continues returning the same destroyed texture. Expire the current texture sets the currentTexture to
null
. It is called by configure()
, resizing the canvas, presentation, transferToImageBitmap()
, and others.
GPUCanvasContext
has the following methods:
configure(configuration)
-
Configures the context for this canvas. This clears the drawing buffer to transparent black (in Replace the drawing buffer).
Called on:
GPUCanvasContext
this.Arguments:
Arguments for the GPUCanvasContext.configure(configuration) method. Parameter Type Nullable Optional Description configuration
GPUCanvasConfiguration
✘ ✘ Desired configuration for the context. Returns: undefined
Content timeline steps:
-
Let device be configuration.
device
. -
? Validate texture format required features of configuration.
format
with device.[[device]]
. -
? Validate texture format required features of each element of configuration.
viewFormats
with device.[[device]]
. -
Let descriptor be the GPUTextureDescriptor for the canvas and configuration(this.
canvas
, configuration). -
Set this.
[[configuration]]
to configuration. -
Set this.
[[textureDescriptor]]
to descriptor. -
Replace the drawing buffer of this, which resets this.
[[drawingBuffer]]
with a bitmap with the new format and tags. -
Issue the subsequent steps on the Device timeline of device.
Device timeline steps:
-
If any of the following requirements are unmet, generate a validation error and stop.
-
validating GPUTextureDescriptor(device, descriptor)must return true.
-
Supported context formats must contain configuration.
format
.
Note: This early validation remains valid until the next
configure()
call, except for validation of the size
, which changes when the canvas is resized. -
-
unconfigure()
-
Removes the context configuration. Destroys any textures produced while configured.
Called on:
GPUCanvasContext
this.Returns: undefined
Content timeline steps:
-
Set this.
[[configuration]]
tonull
. -
Set this.
[[textureDescriptor]]
tonull
. -
Replace the drawing buffer of this.
-
getCurrentTexture()
-
Get the
GPUTexture
that will be composited to the document by theGPUCanvasContext
next.NOTE:
An application should call
getCurrentTexture()
in the same task that renders to the canvas texture. Otherwise, the texture could get destroyed by these steps before the application is finished rendering to it.The expiry task (defined below) is optional to implement. Even if implemented, task source priority is not normatively defined, so may happen as early as the next task, or as late as after all other task sources are empty (see automatic expiry task source). Expiry is only guaranteed when a visible canvas is displayed (updating the rendering of a WebGPU canvas) and in other callers of Replace the drawing buffer.
Called on:
GPUCanvasContext
this.Returns:
GPUTexture
Content timeline steps:
-
If this.
[[configuration]]
isnull
:-
Throw an
InvalidStateError
and stop.
-
-
Assert this.
[[textureDescriptor]]
is notnull
. -
Let device be this.
[[configuration]]
.device
. -
If this.
[[currentTexture]]
isnull
:-
Replace the drawing buffer of this.
-
Set this.
[[currentTexture]]
to the result of calling device.createTexture()
with this.[[textureDescriptor]]
,except with theGPUTexture
's underlying storage pointing to this.[[drawingBuffer]]
. Note: If the texture can’t be created (e.g. due to validation failure or out-of-memory), this generates an error and returns an invalid
GPUTexture
. Some validation here is redundant with that done in configure()
.Implementations must not skip this redundant validation.
-
-
Optionally, queue an automatic expiry task with device device and the following steps:
-
Expire the current texture of this.
Note: If this already happened when updating the rendering of a WebGPU canvas, it has no effect.
-
-
Return this.
[[currentTexture]]
.
Note: The same
GPUTexture
object will be returned by every call to getCurrentTexture()
until "Expire the current texture" runs, even if that GPUTexture
is destroyed, failed validation, or failed to allocate. -
To get a copy of the image contents of a context:
Arguments:
-
context: the
GPUCanvasContext
Returns: image contents
-
Ensure that all submitted work items (e.g. queue submissions) have completed writing to the image (via context.
[[currentTexture]]
). -
Let snapshot be a copy of context.
[[drawingBuffer]]
. -
Let alphaMode be context.
[[configuration]]
.alphaMode
. -
- If alphaMode is
"opaque"
: -
-
Clear the alpha channel of snapshot to 1.0.
-
Tag snapshot as being opaque.
Note: If the
[[currentTexture]]
, if any, has been destroyed(for example in Replace the drawing buffer), the alpha channel is unobservable,and implementations may clear the alpha channel in-place. -
- Otherwise:
-
Tag snapshot with alphaMode.
-
Return snapshot.
To Replace the drawing buffer of a GPUCanvasContext
context:
-
Expire the current texture of context.
-
Let configuration be context.
[[configuration]]
. -
Set context.
[[drawingBuffer]]
to a transparent black image of the same size as context.canvas
.-
If configuration is null, the drawing buffer is tagged with the color space
"srgb"
.In this case, the drawing buffer will remain blank until the context is configured. -
If not, the drawing buffer has the specified configuration.
format
and is tagged with the specified configuration.colorSpace
.
Note: configuration.
alphaMode
is ignored until "get a copy of the image contents of a context". Note: This will often be a no-op, if the drawing buffer is already cleared and has the correct configuration.
-
To Expire the current texture of a GPUCanvasContext
context:
-
If context.
[[currentTexture]]
is notnull
:-
Call context.
[[currentTexture]]
.destroy()
(without destroying context.[[drawingBuffer]]
)to terminate write access to the image. -
Set context.
[[currentTexture]]
tonull
.
-
21.3. HTML Specification Hooks
The following algorithms "hook" into algorithms in the HTML specification, and must run at the specified points.
When the "bitmap" is read from an HTMLCanvasElement
or OffscreenCanvas
with a GPUCanvasContext
context:
-
Return a copy of the image contents of context.
NOTE:
This occurs in many places, including:
-
When an
HTMLCanvasElement
has its rendering updated. -
When an
OffscreenCanvas
with a placeholder canvas element has its rendering updated. -
When
transferToImageBitmap()
creates anImageBitmap
from the bitmap. -
When WebGPU canvas contents are read using other Web APIs, like
drawImage()
,texImage2D()
,texSubImage2D()
,toDataURL()
,toBlob()
, and so on.
If alphaMode
is "opaque"
, this incurs a clear of the alpha channel. Implementations may skip this step when they are able to read or display images in a way that ignores the alpha channel.
If an application needs a canvas only for interop (not presentation), avoid "opaque"
if it is not needed.
When updating the rendering of a WebGPU canvas (an HTMLCanvasElement
or an OffscreenCanvas
with a placeholder canvas element) with a GPUCanvasContext
context, which occurs in the following sub-steps of the event loop processing model:
-
"update the rendering or user interface of that
Document
" -
"update the rendering of that dedicated worker"
Note: Service and Shared workers do not have "update the rendering" steps because they cannot render to user-visible canvases. requestAnimationFrame()
is not exposed in ServiceWorkerGlobalScope
, and
OffscreenCanvas
es from transferControlToOffscreen()
cannot be sent to these workers.
Run the following steps:
-
Expire the current texture of context.
Note: If this already happened in the task queued by
getCurrentTexture()
, it has no effect.
Note: This does not happen for standalone OffscreenCanvas
es (created by new OffscreenCanvas()
).
When transferToImageBitmap()
is called on a canvas with GPUCanvasContext
context, after creating an ImageBitmap
from the canvas’s bitmap:
-
Replace the drawing buffer of context.
Note: This is equivalent to "moving" the (possibly alpha-cleared) image contents into the ImageBitmap, without a copy.
21.4. GPUCanvasConfiguration
The supported context formats are a set of GPUTextureFormat
s that must be supported when specified as a GPUCanvasConfiguration
.format
regardless of the given GPUCanvasConfiguration
.device
, initially set to: «"bgra8unorm"
, "rgba8unorm"
, "rgba16float"
».
Note: Canvas configuration cannot use srgb
formats like "bgra8unorm-srgb"
.Instead, use the non-srgb
equivalent ("bgra8unorm"
), specify the srgb
format in the viewFormats
, and use createView()
to create a view with an srgb
format.
enum GPUCanvasAlphaMode {
    "opaque",
    "premultiplied",
};
dictionary GPUCanvasConfiguration {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.RENDER_ATTACHMENT
    sequence<GPUTextureFormat> viewFormats = [];
    PredefinedColorSpace colorSpace = "srgb";
    GPUCanvasAlphaMode alphaMode = "opaque";
};
GPUCanvasConfiguration
has the following members:
device
, of type GPUDevice-
The
GPUDevice
that textures returned by getCurrentTexture()
will be compatible with. format
, of type GPUTextureFormat-
The format that textures returned by
getCurrentTexture()
will have. Must be one of the Supported context formats. usage
, of type GPUTextureUsageFlags, defaulting to0x10
-
The usage that textures returned by
getCurrentTexture()
will have. RENDER_ATTACHMENT
is the default, but is not automatically included if the usage is explicitly set. Be sure to include RENDER_ATTACHMENT
when setting a custom usage if you wish to use textures returned by getCurrentTexture()
as color targets for a render pass. viewFormats
, of type sequence<GPUTextureFormat>, defaulting to[]
-
The formats that views created from textures returned by
getCurrentTexture()
may use. colorSpace
, of type PredefinedColorSpace, defaulting to"srgb"
-
The color space that values written into textures returned by
getCurrentTexture()
should be displayed with. alphaMode
, of type GPUCanvasAlphaMode, defaulting to"opaque"
-
Determines the effect that alpha values will have on the content of textures returned by
getCurrentTexture()
when read, displayed, or used as an image source.
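As the usage member above notes, setting usage explicitly replaces the default rather than adding to it. A non-normative sketch using the GPUTextureUsage flag values (RENDER_ATTACHMENT is 0x10, COPY_SRC is 0x01):

```javascript
// Non-normative sketch: combining canvas texture usages. Setting `usage`
// explicitly replaces the default, so RENDER_ATTACHMENT must be re-added
// if render-pass output is still wanted.
const RENDER_ATTACHMENT = 0x10; // GPUTextureUsage.RENDER_ATTACHMENT
const COPY_SRC = 0x01;          // GPUTextureUsage.COPY_SRC
const usage = RENDER_ATTACHMENT | COPY_SRC;
```

The resulting value would be passed as the usage member of the GPUCanvasConfiguration dictionary.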
Configure a GPUCanvasContext
to be used with a specific GPUDevice
, using the preferred format for this context:
const canvas = document.createElement('canvas');
const context = canvas.getContext('webgpu');
context.configure({
    device: gpuDevice,
    format: navigator.gpu.getPreferredCanvasFormat(),
});
The GPUTextureDescriptor for the canvas and configuration( (HTMLCanvasElement
or OffscreenCanvas
) canvas, GPUCanvasConfiguration
configuration) is a GPUTextureDescriptor
with the following members:
-
size
: [canvas.width, canvas.height, 1]. -
format
: configuration.format
. -
usage
: configuration.usage
. -
viewFormats
: configuration.viewFormats
.
and other members set to their defaults.
canvas.width refers to HTMLCanvasElement
.width
or OffscreenCanvas
.width
. canvas.height refers to HTMLCanvasElement
.height
or OffscreenCanvas
.height
.
21.4.1. Canvas Color Space
During presentation, the color values in the canvas are converted to the color space of the screen. Color values are then clamped to the [0, 1]
interval in the color space of the screen.
NOTE:
For example, suppose that the value (1.035, -0.175, -0.140)
is written to an 'srgb'
canvas.
If this is presented to an sRGB screen, then this will be converted to sRGB (which is a no-op, because the canvas is sRGB), and then will be clamped to the sRGB value (1.0, 0.0, 0.0)
.
If this is presented to a Display P3 screen, then this will be converted to the value (0.948, 0.106, 0.01)
in the Display P3 color space, and no clamping will be needed.
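The clamp in the example above can be sketched directly; this non-normative helper (`clampToScreenGamut` is a hypothetical name) clamps each channel to [0, 1] after conversion to the screen's color space:

```javascript
// Non-normative sketch of the presentation-time clamp: each channel is
// clamped to [0, 1] in the color space of the screen.
function clampToScreenGamut(rgb) {
  return rgb.map(v => Math.min(1, Math.max(0, v)));
}
```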
21.4.2. Canvas Context sizing
All canvas configuration is set in configure()
except for the resolution of the canvas, which is set by the canvas’s width
and height
.
Note: Like WebGL and 2d canvas, resizing a WebGPU canvas loses the current contents of the drawing buffer.In WebGPU, it does so by replacing the drawing buffer.
When an HTMLCanvasElement
or OffscreenCanvas
canvas with a GPUCanvasContext
context has its width
or height
properties modified, update the canvas size:
-
Replace the drawing buffer of context.
-
Let configuration be context.
[[configuration]]
-
If configuration is not
null
:-
Set context.
[[textureDescriptor]]
to the GPUTextureDescriptor for the canvas and configuration(canvas, configuration).
-
Note: This may result in a GPUTextureDescriptor
which exceeds the maxTextureDimension2D
of the device. In this case, validation will fail inside getCurrentTexture()
.
21.5. GPUCanvasAlphaMode
This enum selects how the contents of the canvas will be interpreted when read, when displayed to the screen or used as an image source (in drawImage, toDataURL, etc.)
Below, src
is a value in the canvas texture, and dst
is an image that the canvas is being composited into (e.g. an HTML page rendering, or a 2D canvas).
"opaque"
-
Read RGB as opaque and ignore alpha values. If the content is not already opaque, the alpha channel is cleared to 1.0 in "get a copy of the image contents of a context".
"premultiplied"
-
Read RGBA as premultiplied: color values are premultiplied by their alpha value. 100% red at 50% alpha is
[0.5, 0, 0, 0.5]
. If out-of-gamut premultiplied RGBA values are output to the canvas, and the canvas is:
- used as an image source
-
Values are preserved, as described in color space conversion.
- displayed to the screen
-
Compositing results are undefined. This is true even if color space conversion would produce in-gamut values before compositing, because the intermediate format for compositing is not specified.
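The premultiplied representation described above can be sketched as a pure function. This is an illustration of the encoding, not a WebGPU API.

```javascript
// Premultiplied alpha: each color channel is multiplied by the alpha
// value, matching the "100% red at 50% alpha" example above.
function premultiply([r, g, b, a]) {
  return [r * a, g * a, b * a, a];
}
```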
22. Errors & Debugging
During the normal course of operation of WebGPU, errors are raised via dispatch error.
After a device is lost (described below), errors are no longer surfaced. At this point, implementations do not need to run validation or error tracking: popErrorScope()
and uncapturederror
stop reporting errors, and the validity of objects on the device becomes unobservable.
Additionally, no errors are generated by the device loss itself. Instead, the GPUDevice
.lost
promise resolves to indicate the device is lost.
22.1. Fatal Errors
enum GPUDeviceLostReason {
    "unknown",
    "destroyed",
};

[Exposed=(Window, Worker), SecureContext]
interface GPUDeviceLostInfo {
    readonly attribute GPUDeviceLostReason reason;
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};
GPUDevice
has the following additional attributes:
lost
, of type Promise<GPUDeviceLostInfo>, readonly-
A slot-backed attribute holding a promise which is created with the device, remains pending for the lifetime of the device, then resolves when the device is lost.
Upon initialization, it is set to a new promise.
22.2. GPUError
[Exposed=(Window, Worker), SecureContext]
interface GPUError {
    readonly attribute DOMString message;
};
GPUError
is the base interface for all errors surfaced from popErrorScope()
and the uncapturederror
event.
Errors must only be generated for operations that explicitly state the conditions one may be generated under in their respective algorithms, and the subtype of error that is generated.
No errors are generated after device loss.
Note: GPUError
may gain new subtypes in future versions of this spec. Applications should handle this possibility, using only the error’s message
when possible, and specializing using instanceof
. Use error.constructor.name
when it’s necessary to serialize an error (e.g. into JSON, for a debug report).
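The note above can be sketched with a mock class hierarchy; the Mock* names below are stand-ins for the real GPUError subtypes, not WebGPU interfaces.

```javascript
// Specialize with instanceof, and use error.constructor.name when
// serializing an error for a debug report.
class MockGPUError {
  constructor(message) { this.message = message; }
}
class MockValidationError extends MockGPUError {}

function serializeError(error) {
  return { type: error.constructor.name, message: error.message };
}
```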
GPUError
has the following attributes:
message
, of type DOMString, readonly-
A human-readable, localizable text message providing information about the error that occurred.
Note: This message is generally intended for application developers to debug their applications and capture information for debug reports, not to be surfaced to end-users.
Note: User agents should not include potentially machine-parsable details in this message, such as free system memory on
"out-of-memory"
or other details about the conditions under which memory was exhausted.

Note: The
message
should follow the best practices for language and direction information. This includes making use of any future standards which may emerge regarding the reporting of string language and direction metadata.

Editorial note: At the time of this writing, no language/direction recommendation is available that provides compatibility and consistency with legacy APIs, but when there is, adopt it formally.
[Exposed=(Window, Worker), SecureContext]
interface GPUValidationError : GPUError {
    constructor(DOMString message);
};
GPUValidationError
is a subtype of GPUError
which indicates that an operation did not satisfy all validation requirements. Validation errors are always indicative of an application error, and are expected to fail the same way across all devices assuming the same [[features]]
and [[limits]]
are in use.
To generate a validation error for GPUDevice
device, run the following steps:
Content timeline steps:
-
Let error be a new
GPUValidationError
with an appropriate error message. -
Dispatch error error to device.
[Exposed=(Window, Worker), SecureContext]
interface GPUOutOfMemoryError : GPUError {
    constructor(DOMString message);
};
GPUOutOfMemoryError
is a subtype of GPUError
which indicates that there was not enough free memory to complete the requested operation. The operation may succeed if attempted again with a lower memory requirement (like using smaller texture dimensions), or if memory used by other resources is released first.
To generate an out-of-memory error for GPUDevice
device, run the following steps:
Content timeline steps:
-
Let error be a new
GPUOutOfMemoryError
with an appropriate error message. -
Dispatch error error to device.
[Exposed=(Window, Worker), SecureContext]
interface GPUInternalError : GPUError {
    constructor(DOMString message);
};
GPUInternalError
is a subtype of GPUError
which indicates that an operation failed for a system or implementation-specific reason even when all validation requirements have been satisfied. For example, the operation may exceed the capabilities of the implementation in a way not easily captured by the supported limits. The same operation may succeed on other devices or under different circumstances.
To generate an internal error for GPUDevice
device, run the following steps:
Content timeline steps:
-
Let error be a new
GPUInternalError
with an appropriate error message. -
Dispatch error error to device.
22.3. Error Scopes
A GPU error scope captures GPUError
s that were generated while the GPU error scope was current. Error scopes are used to isolate errors that occur within a set of WebGPU calls, typically for debugging purposes or to make an operation more fault tolerant.
GPU error scope has the following internal slots:
[[errors]]
, of type list<GPUError
>, initially []-
The
GPUError
s, if any, observed while the GPU error scope was current. [[filter]]
, of typeGPUErrorFilter
-
Determines what type of
GPUError
this GPU error scope observes.
enum GPUErrorFilter {
    "validation",
    "out-of-memory",
    "internal",
};

partial interface GPUDevice {
    undefined pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
GPUErrorFilter
defines the type of errors that should be caught when calling pushErrorScope()
:
"validation"
-
Indicates that the error scope will catch a
GPUValidationError
. "out-of-memory"
-
Indicates that the error scope will catch a
GPUOutOfMemoryError
. "internal"
-
Indicates that the error scope will catch a
GPUInternalError
.
GPUDevice
has the following internal slots:
[[errorScopeStack]]
, of type stack<GPU error scope>-
A stack of GPU error scopes that have been pushed to the
GPUDevice
.
The current error scope for a GPUError
error and GPUDevice
device is determined by issuing the following steps to the Device timeline of device:
Device timeline steps:
-
If error is an instance of:
GPUValidationError
-
Let type be "validation".
GPUOutOfMemoryError
-
Let type be "out-of-memory".
GPUInternalError
-
Let type be "internal".
-
Let scope be the last item of device.
[[errorScopeStack]]
. -
While scope is not
undefined
:-
If scope.
[[filter]]
is type, return scope. -
Set scope to the previous item of device.
[[errorScopeStack]]
.
-
-
Return
undefined
.
To dispatch an error GPUError
error on GPUDevice
device, run the following steps on the Device timeline of device:
Device timeline steps:
-
If device is lost, return.
Note: No errors are generated after device loss.
-
Let scope be the current error scope for error and device.
-
If scope is not
undefined
:-
Append error to scope.
[[errors]]
. -
Return.
-
-
Otherwise issue the following steps to the Content timeline:
Content timeline steps:
-
If the user agent chooses, queue a global task for GPUDevice device with the following steps:
-
Fire a
GPUUncapturedErrorEvent
named "uncapturederror
" on device, with an error
of error.
-
Note: If (and only if) there are no uncapturederror
handlers registered, user agents should surface uncaptured errors to developers, for example as warnings in the browser’s developer console.
Note: The user agent may choose to throttle or limit the number of GPUUncapturedErrorEvent
s that a GPUDevice
can raise to prevent an excessive amount of error handling or logging from impacting performance.
pushErrorScope(filter)
-
Pushes a new GPU error scope onto the
[[errorScopeStack]]
for this.Called on:
GPUDevice
this.Arguments:
Arguments for the GPUDevice.pushErrorScope(filter) method. Parameter Type Nullable Optional Description filter
GPUErrorFilter
✘ ✘ Which class of errors this error scope observes. Returns:
undefined
Content timeline steps:
-
Issue the subsequent steps on the Device timeline of this.
Device timeline steps:
-
Let scope be a new GPU error scope.
-
Set scope.
[[filter]]
to filter. -
Push scope onto this.
[[errorScopeStack]]
.
-
popErrorScope()
-
Pops a GPU error scope off the
[[errorScopeStack]]
for this and resolves to any GPUError
observed by the error scope, or null
if none. There is no guarantee of the ordering of promise resolution.
Called on:
GPUDevice
this.Returns:
Promise
<GPUError
?>Content timeline steps:
-
Let contentTimeline be the current Content timeline.
-
Let promise be a new promise.
-
Issue the check steps on the Device timeline of this.
-
Return promise.
Device timeline check steps:
-
If this is lost, issue the following steps on contentTimeline and return:
Content timeline steps:
-
Resolve promise with
null
.
Note: No errors are generated after device loss.
-
-
If any of the following requirements are unmet:
-
this.
[[errorScopeStack]]
.size must be > 0.
Then issue the following steps on contentTimeline and return:
Content timeline steps:
-
Reject promise with an
OperationError
.
-
-
Let scope be the result of popping an item off of this.
[[errorScopeStack]]
. -
Let error be any one of the items in scope.
[[errors]]
, or null
if there are none. For any two errors E1 and E2 in the list, if E2 was caused by E1, E2 should not be the one selected.
Note: For example, if E1 comes from t = createTexture()
, and E2 comes from t.createView()
because t was invalid, E1 should be preferred since it will be easier for a developer to understand what went wrong. Since both of these are GPUValidationError
s, the only difference will be in the message
field, which is meant only to be read by humans anyway. -
At an unspecified point now or in the future, issue the subsequent steps on contentTimeline.
Note: By allowing
popErrorScope()
calls to resolve in any order, with any of the errors observed by the scope, this spec allows validation to complete out of order, as long as any state observations are made at the appropriate point in adherence to this spec. For example, this allows implementations to perform shader compilation, which depends only on non-stateful inputs, to be completed on a background thread in parallel with other device-timeline work, and report any resulting errors later.
Content timeline steps:
-
Resolve promise with error.
-
Using error scopes to capture validation errors from a GPUDevice
operation that may fail:
gpuDevice.pushErrorScope('validation');

let sampler = gpuDevice.createSampler({
  maxAnisotropy: 0, // Invalid, maxAnisotropy must be at least 1.
});

gpuDevice.popErrorScope().then((error) => {
  if (error) {
    // There was an error creating the sampler, so discard it.
    sampler = null;
    console.error(`An error occurred while creating sampler: ${error.message}`);
  }
});
NOTE:
Error scopes can encompass as many commands as needed. The number of commands an error scope covers will generally be correlated to what sort of action the application intends to take in response to an error occurring.
For example: An error scope that only contains the creation of a single resource, such as a texture or buffer, can be used to detect failures such as out of memory conditions, in which case the application may try freeing some resources and trying the allocation again.
Error scopes do not identify which command failed, however. So, for instance, wrapping all the commands executed while loading a model in a single error scope will not offer enough granularity to determine if the issue was due to memory constraints. As a result, freeing resources would usually not be a productive response to a failure of that scope. A more appropriate response would be to allow the application to fall back to a different model or produce a warning that the model could not be loaded. If responding to memory constraints is desired, the operations allocating memory can always be wrapped in a smaller nested error scope.
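The scope semantics described in this section can be sketched as a small model. This is not the WebGPU API (the real popErrorScope() is asynchronous and runs on the device timeline); it only illustrates the stack, filter matching, and capture behavior.

```javascript
// Minimal model of GPU error scopes: scopes form a stack, a dispatched
// error is captured by the innermost scope whose filter matches its
// type, and popping a scope reports one captured error (or null).
class ErrorScopeModel {
  constructor() { this.stack = []; }
  push(filter) { this.stack.push({ filter, errors: [] }); }
  dispatch(type, message) {
    // Search from the innermost scope outwards for a matching filter.
    for (let i = this.stack.length - 1; i >= 0; i--) {
      if (this.stack[i].filter === type) {
        this.stack[i].errors.push({ type, message });
        return true; // captured
      }
    }
    return false; // uncaptured: would surface via uncapturederror
  }
  pop() {
    const scope = this.stack.pop();
    return scope.errors.length ? scope.errors[0] : null;
  }
}
```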
22.4. Telemetry
When a GPUError
is generated that is not observed by any GPU error scope, the user agent may fire an event named uncapturederror
at a GPUDevice
using GPUUncapturedErrorEvent
.
Note: uncapturederror
events are intended to be used for telemetry and reporting unexpected errors. They may not be dispatched for all uncaptured errors (for example, there may be a limit on the number of errors surfaced), and should not be used for handling known error cases that may occur during normal operation of an application. Prefer using pushErrorScope()
and popErrorScope()
in those cases.
[Exposed=(Window, Worker), SecureContext]
interface GPUUncapturedErrorEvent : Event {
    constructor(DOMString type, GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict);
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};
GPUUncapturedErrorEvent
has the following attributes:
error
, of type GPUError, readonly-
A slot-backed attribute holding an object representing the error that was uncaptured. This has the same type as errors returned by
popErrorScope()
.
partial interface GPUDevice {
    [Exposed=(Window, Worker)]
    attribute EventHandler onuncapturederror;
};
GPUDevice
has the following attributes:
onuncapturederror
, of type EventHandler-
An event handler IDL attribute for the
uncapturederror
event type.
Listening for uncaptured errors from a GPUDevice
:
gpuDevice.addEventListener('uncapturederror', (event) => {
  // Re-surface the error, because adding an event listener may silence console logs.
  console.error('A WebGPU error was not captured:', event.error);

  myEngineDebugReport.uncapturedErrors.push({
    type: event.error.constructor.name,
    message: event.error.message,
  });
});
23. Detailed Operations
This section describes the details of various GPU operations.
This section is incomplete.
23.1. Transfer
Editorial note: describe the transfers at the high level
23.2. Computing
Computing operations provide direct access to the GPU’s programmable hardware. Compute shaders do not have shader stage inputs or outputs; their results are side effects from writing data into storage bindings bound as GPUBufferBindingType."storage"
and GPUStorageTextureBindingLayout
.These operations are encoded within GPUComputePassEncoder
as:
-
dispatchWorkgroups()
-
dispatchWorkgroupsIndirect()
Editorial note: describe the computing algorithm
The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
23.3. Rendering
Rendering is done by a set of GPU operations that are executed within GPURenderPassEncoder
, and result in modifications of the texture data viewed by the render pass attachments. These operations are encoded with:
-
draw()
-
drawIndexed()
, -
drawIndirect()
-
drawIndexedIndirect()
.
Note: rendering is the traditional use of GPUs, and is supported by multiple fixed-function blocks in hardware.
The main rendering algorithm:
render(descriptor, drawCall, state)
Arguments:
-
descriptor: Description of the current
GPURenderPipeline
. -
drawCall: The draw call parameters.
-
state: RenderState of the
GPURenderCommandsMixin
where the draw call is issued.
-
Resolve indices. See § 23.3.1 Index Resolution.
Let vertexList be the result of resolve indices(drawCall, state).
-
Process vertices. See § 23.3.2 Vertex Processing.
Execute process vertices(vertexList, drawCall, descriptor.
vertex
, state). -
Assemble primitives. See § 23.3.3 Primitive Assembly.
Execute assemble primitives(vertexList, drawCall, descriptor.
primitive
). -
Clip primitives. See § 23.3.4 Primitive Clipping.
Let primitiveList be the result of this stage.
-
Rasterize. See § 23.3.5 Rasterization.
Let rasterizationList be the result of rasterize(primitiveList, state).
-
Process fragments. See § 23.3.6 Fragment Processing.
Gather a list of fragments, resulting from executing process fragment(rasterPoint, descriptor.
fragment
, state)for each rasterPoint in rasterizationList. -
Process depth/stencil.
Editorial note: fill out the section, using fragments
-
Write pixels.
Editorial note: fill out the section
23.3.1. Index Resolution
At the first stage of rendering, the pipeline builds a list of vertices to process for each instance.
resolve indices(drawCall, state)
Arguments:
-
drawCall: The draw call parameters.
-
state: The snapshot of the
GPURenderCommandsMixin
state at the time of the draw call.
Returns: list of integer indices.
-
Let vertexIndexList be an empty list of indices.
-
If drawCall is an indexed draw call:
-
Initialize the vertexIndexList with drawCall.indexCount integers.
-
For i in range 0 .. drawCall.indexCount (non-inclusive):
-
Let relativeVertexIndex be fetch index(i + drawCall.
firstIndex
, state.[[index_buffer]]
). -
If relativeVertexIndex has the special value
"out of bounds"
, stop and return the empty list.
Note: Implementations may choose to display a warning when this occurs, especially when it is easy to detect (like in non-indirect indexed draw calls).
-
Append drawCall.
baseVertex
+ relativeVertexIndex to the vertexIndexList.
-
-
-
Otherwise:
-
Initialize the vertexIndexList with drawCall.vertexCount integers.
-
Set each vertexIndexList item i to the value drawCall.firstVertex + i.
-
-
Return vertexIndexList.
Note: in case of indirect draw calls, the indexCount
, vertexCount
, and other properties of drawCall are read from the indirect buffer instead of the draw command itself.
Editorial note: specify indirect commands better.
fetch index(i, state)
Arguments:
-
i: Index of a vertex index to fetch.
-
state: The snapshot of the
GPURenderCommandsMixin
state at the time of the draw call.
Returns: unsigned integer or "out of bounds"
-
Let indexSize be defined by the state.
[[index_format]]
:"uint16"
-
2
"uint32"
-
4
-
If state.
[[index_buffer_offset]]
+|i + 1| × indexSize > state.[[index_buffer_size]]
, return the special value "out of bounds"
. -
Interpret the data in state.
[[index_buffer]]
, starting at offset state.[[index_buffer_offset]]
+ i × indexSize, of size indexSize bytes, as an unsigned integer and return it.
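The index resolution steps above can be sketched as plain functions. This is an illustration, not the spec algorithm itself: the index buffer is modeled as a TypedArray (which stands in for the byte interpretation described above), with the buffer offset assumed to be zero, and field names are simplified stand-ins for the internal slots.

```javascript
// Sketch of "fetch index" with the bounds check, and the indexed
// branch of "resolve indices".
function fetchIndex(i, state) {
  const indexSize = state.indexFormat === 'uint16' ? 2 : 4;
  if (state.indexBufferOffset + (i + 1) * indexSize > state.indexBufferSize) {
    return 'out of bounds';
  }
  return state.indexData[i]; // interpret bytes as an unsigned integer
}

function resolveIndices(drawCall, state) {
  const vertexIndexList = [];
  for (let i = 0; i < drawCall.indexCount; i++) {
    const rel = fetchIndex(i + drawCall.firstIndex, state);
    // An out-of-bounds index empties the list, cancelling the draw.
    if (rel === 'out of bounds') return [];
    vertexIndexList.push(drawCall.baseVertex + rel);
  }
  return vertexIndexList;
}
```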
23.3.2. Vertex Processing
The vertex processing stage is a programmable stage of the render pipeline that processes the vertex attribute data, and produces clip space positions for § 23.3.4 Primitive Clipping, as well as other data for § 23.3.6 Fragment Processing.
process vertices(vertexIndexList, drawCall, desc, state)
Arguments:
-
vertexIndexList: List of vertex indices to process (mutable, passed by reference).
-
drawCall: The draw call parameters.
-
desc: The descriptor of type
GPUVertexState
. -
state: The snapshot of the
GPURenderCommandsMixin
state at the time of the draw call.
Each vertex vertexIndex in the vertexIndexList, in each instance of index rawInstanceIndex, is processed independently. The rawInstanceIndex is in range from 0 to drawCall.instanceCount - 1, inclusive. This processing happens in parallel, and any side effects, such as writes into GPUBufferBindingType."storage"
bindings, may happen in any order.
-
Let instanceIndex be rawInstanceIndex + drawCall.firstInstance.
-
For each non-
null
vertexBufferLayout in the list of desc.buffers
:-
Let i be the index of the buffer layout in this list.
-
Let vertexBuffer, vertexBufferOffset, and vertexBufferBindingSize be the buffer, offset, and size at slot i of state.
[[vertex_buffers]]
. -
Let vertexElementIndex be dependent on vertexBufferLayout.
stepMode
:"vertex"
-
vertexIndex
"instance"
-
instanceIndex
-
For each attributeDesc in vertexBufferLayout.
attributes
:-
Let attributeOffset be vertexBufferOffset + vertexElementIndex * vertexBufferLayout.
arrayStride
+ attributeDesc.offset
. -
Load the attribute data of format attributeDesc.
format
from vertexBuffer starting at offset attributeOffset. The components are loaded in the order x
, y
, z
, w
from buffer memory. If this results in an out-of-bounds access, the resulting value is determined according to WGSL’s invalid memory reference behavior.
-
Optionally (implementation-defined): If attributeOffset + sizeof(attributeDesc.
format
) > vertexBufferOffset + vertexBufferBindingSize, empty vertexIndexList and stop, cancelling the draw call.
Note: This allows implementations to detect out-of-bounds values in the index buffer before issuing a draw call, instead of using invalid memory reference behavior.
-
Convert the data into a shader-visible format, according to channel formats rules.
An attribute of type
"snorm8x2"
and byte values of [0x70, 0xD0]
will be converted to vec2<f32>(0.88, -0.38)
in WGSL. -
Adjust the data size to the shader type:
-
if both are scalar, or both are vectors of the same dimensionality, no adjustment is needed.
-
if data is vector but the shader type is scalar, then only the first component is extracted.
-
if both are vectors, and data has a higher dimension, the extra components are dropped.
An attribute of type
"float32x3"
and value vec3<f32>(1.0, 2.0, 3.0)
will be exposed to the shader as vec2<f32>(1.0, 2.0)
if a 2-component vector is expected. -
if the shader type is a vector of higher dimensionality, or the data is a scalar, then the missing components are filled from the
vec4<*>(0, 0, 0, 1)
value.

An attribute of type
"sint32"
and value 5
will be exposed to the shader as vec4<i32>(5, 0, 0, 1)
if a 4-component vector is expected.
-
-
Bind the data to vertex shader input location attributeDesc.
shaderLocation
.
-
-
-
For each
GPUBindGroup
group at index in state.[[bind_groups]]
:-
For each resource
GPUBindingResource
in the bind group:-
Let entry be the corresponding
GPUBindGroupLayoutEntry
for this resource. -
If entry.
visibility
includesVERTEX
:-
Bind the resource to the shader under group index and binding
GPUBindGroupLayoutEntry.binding
.
-
-
-
-
Set the shader builtins:
-
Set the
vertex_index
builtin, if any, to vertexIndex. -
Set the
instance_index
builtin, if any, to instanceIndex.
-
-
Invoke the vertex shader entry point described by desc.
Note: The target platform caches the results of vertex shader invocations. There is no guarantee that any vertexIndex that repeats more than once will result in multiple invocations. Similarly, there is no guarantee that a single vertexIndex will only be processed once.
The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
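Two of the conversion rules above can be sketched as pure functions: decoding a "snorm8" channel, and adjusting attribute dimensionality to the shader type. These are illustrative helpers with assumed names, not WebGPU APIs.

```javascript
// snorm8: reinterpret the byte as a signed int8 and normalize by 127,
// clamping so that -128 also maps to -1.
function snorm8(byte) {
  const signed = byte > 127 ? byte - 256 : byte;
  return Math.max(-1, signed / 127);
}

// Size adjustment: drop extra components, or fill missing ones from
// the (0, 0, 0, 1) pattern described above.
function adjustToShaderType(data, shaderComponents) {
  const fill = [0, 0, 0, 1];
  const out = data.slice(0, shaderComponents);
  while (out.length < shaderComponents) out.push(fill[out.length]);
  return out;
}
```

For example, snorm8(0x70) is 112 ÷ 127 ≈ 0.88 and snorm8(0xD0) is −48 ÷ 127 ≈ −0.38, matching the "snorm8x2" example above.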
23.3.3. Primitive Assembly
Primitives are assembled by a fixed-function stage of GPUs.
assemble primitives(vertexIndexList, drawCall, desc)
Arguments:
-
vertexIndexList: List of vertex indices to process.
-
drawCall: The draw call parameters.
-
desc: The descriptor of type
GPUPrimitiveState
.
For each instance, the primitives get assembled from the vertices that have been processed by the shaders, based on the vertexIndexList.
-
First, if the primitive topology is a strip (which means that desc.
stripIndexFormat
is not undefined) and the drawCall is indexed, the vertexIndexList is split into sub-lists using the maximum value of desc.
as a separator.Example: a vertexIndexList with values
[1, 2, 65535, 4, 5, 6]
of type"uint16"
will be split in sub-lists[1, 2]
and[4, 5, 6]
. -
For each of the sub-lists vl, primitive generation is done according to the desc.
topology
:"line-list"
-
Line primitives are composed from (vl.0, vl.1), then (vl.2, vl.3), then (vl.4, vl.5), etc. Each subsequent primitive takes 2 vertices.
"line-strip"
-
Line primitives are composed from (vl.0, vl.1), then (vl.1, vl.2), then (vl.2, vl.3), etc. Each subsequent primitive takes 1 vertex.
"triangle-list"
-
Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.3, vl.4, vl.5), then (vl.6, vl.7, vl.8), etc. Each subsequent primitive takes 3 vertices.
"triangle-strip"
-
Triangle primitives are composed from (vl.0, vl.1, vl.2), then (vl.2, vl.1, vl.3), then (vl.2, vl.3, vl.4), then (vl.4, vl.3, vl.5), etc. Each subsequent primitive takes 1 vertex.
Editorial note: should this be defined more formally?
Any incomplete primitives are dropped.
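The strip splitting and triangle-strip patterns above can be sketched as follows. These are illustrative functions, not part of the spec's formal algorithms.

```javascript
// Split an indexed strip on the primitive-restart value (the maximum
// value of the index format, e.g. 65535 for "uint16").
function splitStrip(indices, restartValue) {
  const lists = [[]];
  for (const i of indices) {
    if (i === restartValue) lists.push([]);
    else lists[lists.length - 1].push(i);
  }
  return lists;
}

// Emit triangles from a strip; every other triangle swaps its first
// two vertices, matching the (vl.0, vl.1, vl.2), (vl.2, vl.1, vl.3),
// (vl.2, vl.3, vl.4), ... pattern above. Incomplete primitives drop.
function assembleTriangleStrip(vl) {
  const tris = [];
  for (let i = 0; i + 2 < vl.length; i++) {
    tris.push(i % 2 === 0 ? [vl[i], vl[i + 1], vl[i + 2]]
                          : [vl[i + 1], vl[i], vl[i + 2]]);
  }
  return tris;
}
```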
23.3.4. Primitive Clipping
Vertex shaders have to produce a built-in position (of type vec4<f32>
), which denotes the clip position of a vertex in clip space coordinates.
Primitives are clipped to the clip volume, which, for any clip position p inside a primitive, is defined by the following inequalities:
-
−p.w ≤ p.x ≤ p.w
-
−p.w ≤ p.y ≤ p.w
-
0 ≤ p.z ≤ p.w (depth clipping)
If descriptor.primitive
.unclippedDepth
is true
, depth clipping is not applied: the clip volume is not bounded in the z dimension.
A primitive passes through this stage unchanged if every one of its edges lies entirely inside the clip volume. If the edges of a primitive intersect the boundary of the clip volume, the intersecting edges are reconnected by new edges that lie along the boundary of the clip volume. For triangular primitives (descriptor.primitive
.topology
is "triangle-list"
or "triangle-strip"
), this reconnection may result in the introduction of new vertices into the polygon, internally.
If a primitive intersects an edge of the clip volume’s boundary, the clipped polygon must include a point on this boundary edge.
If the vertex shader outputs other floating-point values (scalars and vectors), qualified with "perspective" interpolation, they also get clipped. The output values associated with a vertex that lies within the clip volume are unaffected by clipping. If a primitive is clipped, however, the output values assigned to vertices produced by clipping are clipped.
Considering an edge between vertices a and b that got clipped, resulting in the vertex c, let’s define t to be the ratio between the edge vertices: c.p = t × a.p + (1 − t) × b.p, where x.p is the output clip position of a vertex x.
For each vertex output value "v" with a corresponding fragment input, a.v and b.v would be the outputs for the a and b vertices respectively. The clipped shader output c.v is produced based on the interpolation qualifier:
- "flat"
-
Flat interpolation is unaffected, and is based on the provoking vertex, which is the first vertex in the primitive. The output value is the same for the whole primitive, and matches the vertex output of the provoking vertex: c.v = provoking vertex.v
- "linear"
-
The interpolation ratio gets adjusted against the perspective coordinates of the clip positions, so that the result of interpolation is linear in screen space.
Editorial note: provide more specifics here, if possible
- "perspective"
-
The value is linearly interpolated in clip space, producing perspective-correct values:
c.v = t × a.v + (1 − t) × b.v
Editorial note: link to interpolation qualifiers in WGSL
The result of primitive clipping is a new set of primitives, which are contained within the clip volume.
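The clip volume inequalities above can be sketched as a predicate. This is illustrative only; p is a clip position {x, y, z, w}, and the unclippedDepth flag models the behavior described for descriptor.primitive.unclippedDepth.

```javascript
// A clip position p is inside the clip volume when:
//   -p.w <= p.x <= p.w, -p.w <= p.y <= p.w, 0 <= p.z <= p.w.
// With unclippedDepth, the z bounds are not applied.
function insideClipVolume(p, unclippedDepth = false) {
  const xyInside = -p.w <= p.x && p.x <= p.w &&
                   -p.w <= p.y && p.y <= p.w;
  if (!xyInside) return false;
  return unclippedDepth || (0 <= p.z && p.z <= p.w);
}
```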
23.3.5. Rasterization
Rasterization is the hardware processing stage that maps the generated primitives to the 2-dimensional rendering area of the framebuffer: the set of render attachments in the current GPURenderPassEncoder
. This rendering area is split into an even grid of pixels.
The framebuffer coordinates start from the top-left corner of the render targets.Each unit corresponds exactly to one pixel. See § 3.3 Coordinate Systems for more information.
Rasterization determines the set of pixels affected by a primitive. In case of multi-sampling, each pixel is further split into descriptor.multisample
.count
samples. The standard sample patterns are as follows, with positions in framebuffer coordinates relative to the top-left corner of the pixel, such that the pixel ranges from (0, 0) to (1, 1):
multisample .count | Sample positions |
---|---|
1 | Sample 0: (0.5, 0.5) |
4 | Sample 0: (0.375, 0.125) Sample 1: (0.875, 0.375) Sample 2: (0.125, 0.625) Sample 3: (0.625, 0.875) |
Let’s define a FragmentDestination to contain:
- position
-
the 2D pixel position using framebuffer coordinates
- sampleIndex
-
an integer in case § 23.3.10 Sample frequency shading is active, or
null
otherwise
We’ll also use a notion of normalized device coordinates, or NDC. In this coordinate system, the viewport bounds range in X and Y from -1 to 1, and in Z from 0 to 1.
Rasterization produces a list of RasterizationPoints, each containing the following data:
- destination
-
refers to FragmentDestination
- coverageMask
-
refers to multisample coverage mask (see § 23.3.11 Sample Masking)
- frontFacing
-
is true if it’s a point on the front face of a primitive
- perspectiveDivisor
-
refers to interpolated 1.0 ÷ W across the primitive
- depth
-
refers to the depth in viewport coordinates, i.e. between the
[[viewport]]
minDepth
and maxDepth
. - primitiveVertices
-
refers to the list of vertex outputs forming the primitive
- barycentricCoordinates
-
refers to § 23.3.5.3 Barycentric coordinates
Editorial note: define the depth computation algorithm
rasterize(primitiveList, state)
Arguments:
-
primitiveList: List of primitives to rasterize.
-
state: The active RenderState.
Returns: list of RasterizationPoint.
Each primitive in primitiveList is processed independently. However, the order of primitives affects later stages, such as depth/stencil operations and pixel writes.
-
First, the clipped vertices are transformed into NDC (normalized device coordinates). Given the output position p, the NDC position and perspective divisor are:
ndc(p) = vector(p.x ÷ p.w, p.y ÷ p.w, p.z ÷ p.w)
divisor(p) = 1.0 ÷ p.w
-
Let vp be state.
[[viewport]]
.Map the NDC position n into viewport coordinates:-
Compute framebuffer coordinates from the render target offset and size:
framebufferCoords(n) = vector(vp.
x
+ 0.5 × (n.x + 1) × vp.width
, vp.y
+ .5 × (n.y + 1) × vp.height
) -
Compute depth by linearly mapping [0,1] to the viewport depth range:
depth(n) = vp.
minDepth
+ n.z
× ( vp.maxDepth
- vp.minDepth
)
-
-
Let rasterizationPoints be an empty list.
Editorial note: specify that each rasterization point gets assigned an interpolated
divisor(p)
,framebufferCoords(n)
,depth(n)
, as well as the other attributes. -
Proceed with a specific rasterization algorithm,depending on
primitive
.topology
:"point-list"
-
The point, if not filtered by § 23.3.4 Primitive Clipping, goes into § 23.3.5.1 Point Rasterization.
"line-list"
or"line-strip"
-
The line cut by § 23.3.4 Primitive Clipping goes into § 23.3.5.2 Line Rasterization.
"triangle-list"
or"triangle-strip"
-
The polygon produced in § 23.3.4 Primitive Clipping goes into § 23.3.5.4 Polygon Rasterization.
-
Remove all the points rp from rasterizationPoints that have rp.destination.position outside of state.
[[scissorRect]]
. -
Return rasterizationPoints.
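The NDC and viewport transforms in the rasterize steps above can be sketched as plain functions. This is an illustration under stated assumptions: the Y flip from NDC (y pointing up) to framebuffer coordinates (y pointing down) is assumed here, and the object shapes are ad hoc.

```javascript
// NDC position and perspective divisor for a clip position p.
function ndc(p) {
  return { x: p.x / p.w, y: p.y / p.w, z: p.z / p.w, divisor: 1.0 / p.w };
}

// Framebuffer coordinates from the viewport offset and size,
// flipping Y between NDC and framebuffer conventions.
function framebufferCoords(n, vp) {
  return [vp.x + 0.5 * (n.x + 1) * vp.width,
          vp.y + 0.5 * (-n.y + 1) * vp.height];
}

// Linearly map NDC z in [0, 1] to the viewport depth range.
function viewportDepth(n, vp) {
  return vp.minDepth + n.z * (vp.maxDepth - vp.minDepth);
}
```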
23.3.5.1. Point Rasterization
A single FragmentDestination is selected within the pixel containing the framebuffer coordinates of the point.
The coverage mask depends on the multi-sampling mode:
sample-frequency multi-sampling
-
coverageMask = 1 ≪ sampleIndex
pixel-frequency multi-sampling
-
coverageMask = (1 ≪ descriptor.multisample.count) − 1
no multi-sampling
-
coverageMask = 1
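Note: a non-normative sketch of the three cases (the mode strings and helper name are illustrative, not API values):

```javascript
// Coverage mask for a rasterized point, per multi-sampling mode.
// mode is one of "sample-frequency", "pixel-frequency", "none".
function pointCoverageMask(mode, { sampleIndex = 0, count = 1 } = {}) {
  switch (mode) {
    case "sample-frequency":
      return 1 << sampleIndex;  // only the current sample is covered
    case "pixel-frequency":
      return (1 << count) - 1;  // all samples of the pixel are covered
    default:
      return 1;                 // no multi-sampling
  }
}
```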
23.3.5.2. Line Rasterization
Editorial note: fill out this section
23.3.5.3. Barycentric coordinates
Barycentric coordinates are a list of n numbers bi, defined for a point p inside a convex polygon with n vertices vi in framebuffer space. Each bi is in the range 0 to 1, inclusive, and represents the proximity to vertex vi. Their sum is always constant:
∑ (bi) = 1
These coordinates uniquely specify any point p within the polygon (or on its boundary) as:
p = ∑ (bi × pi)
For a polygon with 3 vertices (a triangle), the barycentric coordinates of any point p can be computed as follows:
Apolygon = A(v1, v2, v3)
b1 = A(p, v2, v3) ÷ Apolygon
b2 = A(v1, p, v3) ÷ Apolygon
b3 = A(v1, v2, p) ÷ Apolygon
Where A(list of points) is the area of the polygon with the given set of vertices.
For polygons with more than 3 vertices, the exact algorithm is implementation-dependent. One of the possible implementations is to triangulate the polygon and compute the barycentrics of a point based on the triangle it falls into.
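Note: for the triangle case, the formulas above can be sketched with the shoelace formula for signed area (non-normative; points are [x, y] arrays in framebuffer space):

```javascript
// Signed area of a triangle given three [x, y] points (shoelace formula).
function signedArea(a, b, c) {
  return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]));
}

// Barycentric coordinates of point p with respect to triangle (v1, v2, v3):
// bi is the ratio of the sub-triangle area opposite vi to the whole area.
function barycentric(p, v1, v2, v3) {
  const A = signedArea(v1, v2, v3);
  return [
    signedArea(p, v2, v3) / A,
    signedArea(v1, p, v3) / A,
    signedArea(v1, v2, p) / A,
  ];
}
```

The three coordinates always sum to 1 for any p, which is the invariant stated above.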
23.3.5.4. Polygon Rasterization
A polygon is front-facing if it’s oriented towards the projection. Otherwise, the polygon is back-facing.
rasterize polygon()
Arguments:
Returns: list of RasterizationPoint.
-
Let rasterizationPoints be an empty list.
-
Let v(i) be the framebuffer coordinates for the clipped vertex number i (starting with 1) in a rasterized polygon of n vertices.
Note: this section uses the term "polygon" instead of "triangle", since the § 23.3.4 Primitive Clipping stage may have introduced additional vertices. This is non-observable by the application.
-
Determine if the polygon is front-facing, which depends on the sign of the area occupied by the polygon in framebuffer coordinates:
area = 0.5 × ((v1.x × vn.y − vn.x × v1.y) + ∑ (vi+1.x × vi.y − vi.x × vi+1.y))
The sign of area is interpreted based on the primitive.frontFace:
"ccw"
-
area > 0 is considered front-facing, otherwise back-facing
"cw"
-
area < 0 is considered front-facing, otherwise back-facing
-
Cull based on primitive.cullMode:
"none"
-
All polygons pass this test.
"front"
-
The front-facing polygons are discarded, and are not processed in later stages of the render pipeline.
"back"
-
The back-facing polygons are discarded.
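Note: the facing and culling steps can be sketched as follows (non-normative; verts is a list of [x, y] framebuffer-space vertices, and the summation matches the area formula above, including the wrap-around term):

```javascript
// Signed polygon area following the spec's sum:
// 0.5 × ∑ (v(i+1).x × v(i).y − v(i).x × v(i+1).y), indices wrapping around.
function polygonArea(verts) {
  let sum = 0;
  for (let i = 0; i < verts.length; i++) {
    const a = verts[i], b = verts[(i + 1) % verts.length];
    sum += b[0] * a[1] - a[0] * b[1];
  }
  return 0.5 * sum;
}

// frontFace is "ccw" or "cw", per GPUPrimitiveState.frontFace.
function isFrontFacing(verts, frontFace) {
  const area = polygonArea(verts);
  return frontFace === "ccw" ? area > 0 : area < 0;
}

// cullMode is "none", "front", or "back", per GPUPrimitiveState.cullMode.
function isCulled(verts, { frontFace = "ccw", cullMode = "none" } = {}) {
  if (cullMode === "none") return false;
  const front = isFrontFacing(verts, frontFace);
  return cullMode === "front" ? front : !front;
}
```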
-
Determine a set of fragments inside the polygon in framebuffer space; these are the locations scheduled for the per-fragment operations. This operation is known as "point sampling". The logic is based on descriptor.multisample:
disabled
-
Fragments are associated with pixel centers. That is, all the points with coordinates C, where fract(C) = vector2(0.5, 0.5) in the framebuffer space, enclosed in the polygon, are included. If a pixel center is on the edge of the polygon, whether or not it’s included is not defined.
Note: this becomes a subject of precision for the rasterizer.
enabled
-
Each pixel is associated with descriptor.multisample.count locations, which are implementation-defined. The locations are ordered, and the list is the same for each pixel of the framebuffer. Each location corresponds to one fragment in the multisampled framebuffer. The rasterizer builds a mask of locations being hit inside each pixel and provides it as the "sample-mask" built-in to the fragment shader.
-
For each produced fragment of type FragmentDestination:
-
Let rp be a new RasterizationPoint object.
-
Compute the list b as § 23.3.5.3 Barycentric coordinates of that fragment. Set rp.barycentricCoordinates to b.
-
Let di be the depth value of vi.
Editorial note: define how this value is constructed.
-
Set rp.depth to ∑ (bi × di)
-
Append rp to rasterizationPoints.
-
-
Return rasterizationPoints.
23.3.6. Fragment Processing
The fragment processing stage is a programmable stage of the render pipeline that computes the fragment data (often a color) to be written into render targets.
This stage produces a Fragment for each RasterizationPoint:
-
destination refers to FragmentDestination.
-
coverageMask refers to the multisample coverage mask (see § 23.3.11 Sample Masking).
-
depth refers to the depth in viewport coordinates, i.e. between the [[viewport]] minDepth and maxDepth.
-
colors refers to the list of color values, one for each target in colorAttachments.
process fragment(rp, desc, state)
Arguments:
-
rp: The RasterizationPoint, produced by § 23.3.5 Rasterization.
-
desc: The descriptor of type GPUFragmentState.
-
state: The active RenderState.
Returns: Fragment or null.
-
Let fragment be a new Fragment object.
-
Set fragment.destination to rp.destination.
-
Set fragment.coverageMask to rp.coverageMask.
-
Set fragment.depth to rp.depth.
-
If desc is not null:
-
Set the shader input builtins. For each non-composite argument of the entry point, annotated as a builtin, set its value based on the annotation:
position
-
vec4<f32>(rp.destination.position, rp.depth, rp.perspectiveDivisor)
front_facing
-
rp.frontFacing
sample_index
-
rp.destination.sampleIndex
sample_mask
-
rp.coverageMask
-
For each user-specified shader stage input of the fragment stage:
-
Let value be the interpolated fragment input, based on rp.barycentricCoordinates, rp.primitiveVertices, and the interpolation qualifier on the input.
Editorial note: describe the exact equations.
-
Set the corresponding fragment shader location input to value.
-
-
Invoke the fragment shader entry point described by desc.
The device may become lost if shader execution does not end in a reasonable amount of time, as determined by the user agent.
-
If the fragment issued discard, return null.
-
Set fragment.colors to the user-specified shader stage output values from the shader.
-
Take the shader output builtins:
-
If the frag_depth builtin is produced by the shader as value:
-
Let vp be state.[[viewport]].
-
Set fragment.depth to clamp(value, vp.minDepth, vp.maxDepth).
-
If the sample_mask builtin is produced by the shader as value:
-
Set fragment.coverageMask to fragment.coverageMask ∧ value.
-
Otherwise, we are in § 23.3.8 No Color Output mode, and fragment.colors is empty.
-
Return fragment.
Processing of fragments happens in parallel, while any side effects, such as writes into GPUBufferBindingType."storage" bindings, may happen in any order.
23.3.7. Output Merging
Editorial note: fill out this section
The depth input to this stage, if any, is clamped to the current [[viewport]] depth range (regardless of whether the fragment shader stage writes the frag_depth builtin).
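Note: a non-normative sketch of this clamp (vp is assumed to carry minDepth and maxDepth):

```javascript
// Viewport depth clamp applied to the fragment depth before output merging,
// whether the depth came from interpolation or from frag_depth.
function clampDepth(depth, vp) {
  return Math.min(Math.max(depth, vp.minDepth), vp.maxDepth);
}
```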
23.3.8. No Color Output
In no-color-output mode, the pipeline does not produce any color attachment outputs.
The pipeline still performs rasterization and produces depth values based on the vertex position output. Depth testing and stencil operations can still be used.
23.3.9. Alpha to Coverage
In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated, based on the alpha component of the fragment shader output value at @location(0).
The algorithm for producing the extra mask is platform-dependent and can vary for different pixels. It guarantees that:
-
if alpha ≤ 0.0, the result is 0x0
-
if alpha ≥ 1.0, the result is 0xFFFFFFFF
-
if alpha is greater than some other value alpha1, then the produced sample mask has at least as many bits set to 1 as the mask for alpha1
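Note: one function satisfying these guarantees, assuming a 32-sample mask (non-normative; the real mask is implementation-defined and may vary per pixel):

```javascript
// A possible alpha-to-coverage mask: set the low ceil(alpha × 32) bits.
// Satisfies the three guarantees: 0x0 at alpha ≤ 0, 0xFFFFFFFF at alpha ≥ 1,
// and monotonically non-decreasing popcount in between.
function alphaToCoverageMask(alpha) {
  if (alpha <= 0.0) return 0x0;
  if (alpha >= 1.0) return 0xFFFFFFFF;
  const bits = Math.ceil(alpha * 32);
  return bits >= 32 ? 0xFFFFFFFF : (1 << bits) - 1;
}
```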
23.3.10. Sample frequency shading
Editorial note: fill out the section
23.3.11. Sample Masking
The final sample mask for a pixel is computed as: rasterization mask & mask & shader-output mask.
Only the lower count bits of the mask are considered.
If the bit at position N of the final sample mask has a value of 0, the sample color outputs (corresponding to sample N) to all attachments of the fragment shader are discarded. Also, no depth test or stencil operations are executed on the relevant samples of the depth-stencil attachment.
Note: the color output for sample N is produced by the fragment shader execution with SV_SampleIndex == N for the current pixel. If the fragment shader doesn’t use this semantic, it’s only executed once per pixel.
The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. The samples included in the shape get their relevant bits set to 1 in the mask.
The shader-output mask takes the output value of the "sample_mask" builtin in the fragment shader. If the builtin is not output from the fragment shader and alphaToCoverageEnabled is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.
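Note: the combination above can be sketched as follows (non-normative; the argument names are illustrative, and count stands for the multisample count):

```javascript
// Final per-pixel sample mask: AND of the rasterization mask, the
// GPUMultisampleState.mask, and the shader-output mask, keeping only
// the lower `count` bits.
function finalSampleMask(rasterizationMask, multisampleMask, shaderOutputMask, count) {
  const lowBits = (1 << count) - 1;
  return rasterizationMask & multisampleMask & shaderOutputMask & lowBits;
}
```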
24. Type Definitions
typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;
typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;
typedef unsigned long long GPUSize64Out;
typedef unsigned long GPUIntegerCoordinateOut;
typedef unsigned long GPUSize32Out;
typedef unsigned long GPUFlagsConstant;
24.1. Colors & Vectors
dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;
Note: double is large enough to precisely hold 32-bit signed/unsigned integers and single-precision floats.
r, of type double
-
The red channel value.
g, of type double
-
The green channel value.
b, of type double
-
The blue channel value.
a, of type double
-
The alpha channel value.
For a given GPUColor value color, depending on its type, the syntax:
-
color.r refers to either GPUColorDict.r or the first item of the sequence (asserting there is such an item).
-
color.g refers to either GPUColorDict.g or the second item of the sequence (asserting there is such an item).
-
color.b refers to either GPUColorDict.b or the third item of the sequence (asserting there is such an item).
-
color.a refers to either GPUColorDict.a or the fourth item of the sequence (asserting there is such an item).
validate GPUColor shape(color)
Arguments:
-
color: The GPUColor to validate.
Returns: undefined
-
Throw a TypeError if color is a sequence and color.length ≠ 4.
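Note: a non-normative JavaScript sketch of this validation together with the color.r/g/b/a accessors defined above (sequences are modeled as arrays, dictionaries as plain objects):

```javascript
// "validate GPUColor shape": a sequence form must have exactly 4 elements.
function validateGPUColorShape(color) {
  if (Array.isArray(color) && color.length !== 4) {
    throw new TypeError("GPUColor sequence must have exactly 4 elements");
  }
}

// The color.r / color.g / color.b / color.a accessor for the union type
// (sequence<double> or GPUColorDict); channel is "r", "g", "b", or "a".
function colorChannel(color, channel) {
  if (Array.isArray(color)) {
    return color["rgba".indexOf(channel)];
  }
  return color[channel];
}
```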
dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
For a given GPUOrigin2D value origin, depending on its type, the syntax:
-
origin.x refers to either GPUOrigin2DDict.x or the first item of the sequence (0 if not present).
-
origin.y refers to either GPUOrigin2DDict.y or the second item of the sequence (0 if not present).
validate GPUOrigin2D shape(origin)
Arguments:
-
origin: The GPUOrigin2D to validate.
Returns: undefined
-
Throw a TypeError if origin is a sequence and origin.length > 2.
dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
For a given GPUOrigin3D value origin, depending on its type, the syntax:
-
origin.x refers to either GPUOrigin3DDict.x or the first item of the sequence (0 if not present).
-
origin.y refers to either GPUOrigin3DDict.y or the second item of the sequence (0 if not present).
-
origin.z refers to either GPUOrigin3DDict.z or the third item of the sequence (0 if not present).
validate GPUOrigin3D shape(origin)
Arguments:
-
origin: The GPUOrigin3D to validate.
Returns: undefined
-
Throw a TypeError if origin is a sequence and origin.length > 3.
dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    GPUIntegerCoordinate height = 1;
    GPUIntegerCoordinate depthOrArrayLayers = 1;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
width, of type GPUIntegerCoordinate
-
The width of the extent.
height, of type GPUIntegerCoordinate, defaulting to 1
-
The height of the extent.
depthOrArrayLayers, of type GPUIntegerCoordinate, defaulting to 1
-
The depth of the extent or the number of array layers it contains. If used with a GPUTexture with a GPUTextureDimension of "3d", defines the depth of the texture. If used with a GPUTexture with a GPUTextureDimension of "2d", defines the number of array layers in the texture.
For a given GPUExtent3D value extent, depending on its type, the syntax:
-
extent.width refers to either GPUExtent3DDict.width or the first item of the sequence (asserting there is such an item).
-
extent.height refers to either GPUExtent3DDict.height or the second item of the sequence (1 if not present).
-
extent.depthOrArrayLayers refers to either GPUExtent3DDict.depthOrArrayLayers or the third item of the sequence (1 if not present).
validate GPUExtent3D shape(extent)
Arguments:
-
extent: The GPUExtent3D to validate.
Returns: undefined
-
Throw a TypeError if:
-
extent is a sequence, and
-
extent.length < 1 or extent.length > 3.
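Note: a non-normative sketch combining this validation with the accessor defaults above (the helper name is illustrative):

```javascript
// Normalizes a GPUExtent3D union value to the dictionary form.
// Sequences may omit trailing components, which default to
// height = 1 and depthOrArrayLayers = 1.
function normalizeExtent(extent) {
  if (Array.isArray(extent)) {
    if (extent.length < 1 || extent.length > 3) {
      throw new TypeError("GPUExtent3D sequence must have 1 to 3 elements");
    }
    const [width, height = 1, depthOrArrayLayers = 1] = extent;
    return { width, height, depthOrArrayLayers };
  }
  const { width, height = 1, depthOrArrayLayers = 1 } = extent;
  return { width, height, depthOrArrayLayers };
}
```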
25. Feature Index
25.1. "depth-clip-control"
Allows depth clipping to be disabled.
This feature adds the following optional API surfaces:
-
New GPUPrimitiveState dictionary members:
-
unclippedDepth
-
25.2. "depth32float-stencil8"
Allows for explicit creation of textures of format "depth32float-stencil8"
.
This feature adds the following optional API surfaces:
-
New GPUTextureFormat enum values:
-
"depth32float-stencil8"
-
25.3. "texture-compression-bc"
Allows for explicit creation of textures of BC compressed formats.
This feature adds the following optional API surfaces:
-
New GPUTextureFormat enum values:
-
"bc1-rgba-unorm"
-
"bc1-rgba-unorm-srgb"
-
"bc2-rgba-unorm"
-
"bc2-rgba-unorm-srgb"
-
"bc3-rgba-unorm"
-
"bc3-rgba-unorm-srgb"
-
"bc4-r-unorm"
-
"bc4-r-snorm"
-
"bc5-rg-unorm"
-
"bc5-rg-snorm"
-
"bc6h-rgb-ufloat"
-
"bc6h-rgb-float"
-
"bc7-rgba-unorm"
-
"bc7-rgba-unorm-srgb"
-
25.4. "texture-compression-etc2"
Allows for explicit creation of textures of ETC2 compressed formats.
This feature adds the following optional API surfaces:
-
New GPUTextureFormat enum values:
-
"etc2-rgb8unorm"
-
"etc2-rgb8unorm-srgb"
-
"etc2-rgb8a1unorm"
-
"etc2-rgb8a1unorm-srgb"
-
"etc2-rgba8unorm"
-
"etc2-rgba8unorm-srgb"
-
"eac-r11unorm"
-
"eac-r11snorm"
-
"eac-rg11unorm"
-
"eac-rg11snorm"
-
25.5. "texture-compression-astc"
Allows for explicit creation of textures of ASTC compressed formats.
This feature adds the following optional API surfaces:
-
New GPUTextureFormat enum values:
-
"astc-4x4-unorm"
-
"astc-4x4-unorm-srgb"
-
"astc-5x4-unorm"
-
"astc-5x4-unorm-srgb"
-
"astc-5x5-unorm"
-
"astc-5x5-unorm-srgb"
-
"astc-6x5-unorm"
-
"astc-6x5-unorm-srgb"
-
"astc-6x6-unorm"
-
"astc-6x6-unorm-srgb"
-
"astc-8x5-unorm"
-
"astc-8x5-unorm-srgb"
-
"astc-8x6-unorm"
-
"astc-8x6-unorm-srgb"
-
"astc-8x8-unorm"
-
"astc-8x8-unorm-srgb"
-
"astc-10x5-unorm"
-
"astc-10x5-unorm-srgb"
-
"astc-10x6-unorm"
-
"astc-10x6-unorm-srgb"
-
"astc-10x8-unorm"
-
"astc-10x8-unorm-srgb"
-
"astc-10x10-unorm"
-
"astc-10x10-unorm-srgb"
-
"astc-12x10-unorm"
-
"astc-12x10-unorm-srgb"
-
"astc-12x12-unorm"
-
"astc-12x12-unorm-srgb"
-
25.6. "timestamp-query"
Adds the ability to query timestamps from GPU command buffers. See § 20.4 Timestamp Query.
This feature adds the following optional API surfaces:
-
New GPUQueryType values:
-
"timestamp"
-
New GPUComputePassDescriptor members:
-
timestampWrites
-
New GPURenderPassDescriptor members:
-
timestampWrites
-
25.7. "indirect-first-instance"
Allows the use of non-zero firstInstance values in indirect draw parameters and indirect drawIndexed parameters.
This feature adds no optional API surfaces.
25.8. "shader-f16"
Allows the use of the half-precision floating-point type f16 in WGSL.
This feature adds the following optional API surfaces:
-
New WGSL extensions:
25.9. "rg11b10ufloat-renderable"
Allows the RENDER_ATTACHMENT usage on textures with format "rg11b10ufloat", and also allows textures of that format to be blended and multisampled.
This feature adds no optional API surfaces.
25.10. "bgra8unorm-storage"
Allows the STORAGE_BINDING usage on textures with format "bgra8unorm".
This feature adds no optional API surfaces.
25.11. "float32-filterable"
Makes textures with formats "r32float", "rg32float", and "rgba32float" filterable.
26. Appendices
26.1. Texture Format Capabilities
26.1.1. Plain color formats
All plain color formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usage.
The RENDER_ATTACHMENT and STORAGE_BINDING columns specify support for GPUTextureUsage.RENDER_ATTACHMENT and GPUTextureUsage.STORAGE_BINDING usage respectively.
The render target pixel byte cost and render target component alignment are used to validate the maxColorAttachmentBytesPerSample limit.
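Note: a running-offset sketch of how pixel byte costs and component alignments could combine toward this limit (non-normative; the exact validation algorithm is defined where the limit is checked, and the helper name and argument shape are assumptions of this sketch):

```javascript
// Sums render target pixel byte costs, aligning the running offset to each
// format's render target component alignment first. `attachments` is a list
// of { cost, alignment } pairs taken from the table below.
function colorAttachmentBytesPerSample(attachments) {
  let offset = 0;
  for (const { cost, alignment } of attachments) {
    offset = Math.ceil(offset / alignment) * alignment; // align up
    offset += cost;
  }
  return offset;
}
```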
Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.
Format | GPUTextureSampleType | RENDER_ATTACHMENT | blendable | multisampling | resolve | "write-only" or "read-only" STORAGE_BINDING | "read-write" STORAGE_BINDING | Texel block copy footprint (Bytes) | Render target pixel byte cost (Bytes) |
---|---|---|---|---|---|---|---|---|---|
8 bits per component (1-byte render target component alignment) | |||||||||
r8unorm | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 1 | |||
r8snorm | "float" ,"unfilterable-float" | 1 | – | ||||||
r8uint | "uint" | ✓ | ✓ | 1 | |||||
r8sint | "sint" | ✓ | ✓ | 1 | |||||
rg8unorm | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 2 | |||
rg8snorm | "float" ,"unfilterable-float" | 2 | – | ||||||
rg8uint | "uint" | ✓ | ✓ | 2 | |||||
rg8sint | "sint" | ✓ | ✓ | 2 | |||||
rgba8unorm | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | ✓ | 4 | 8 | |
rgba8unorm-srgb | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 4 | 8 | ||
rgba8snorm | "float" ,"unfilterable-float" | ✓ | 4 | – | |||||
rgba8uint | "uint" | ✓ | ✓ | ✓ | 4 | ||||
rgba8sint | "sint" | ✓ | ✓ | ✓ | 4 | ||||
bgra8unorm | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | If "bgra8unorm-storage" is enabled | 4 | 8 | |
bgra8unorm-srgb | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 4 | 8 | ||
16 bits per component (2-byte render target component alignment) | |||||||||
r16uint | "uint" | ✓ | ✓ | 2 | |||||
r16sint | "sint" | ✓ | ✓ | 2 | |||||
r16float | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 2 | |||
rg16uint | "uint" | ✓ | ✓ | 4 | |||||
rg16sint | "sint" | ✓ | ✓ | 4 | |||||
rg16float | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 4 | |||
rgba16uint | "uint" | ✓ | ✓ | ✓ | 8 | ||||
rgba16sint | "sint" | ✓ | ✓ | ✓ | 8 | ||||
rgba16float | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | ✓ | 8 | ||
32 bits per component (4-byte render target component alignment) | |||||||||
r32uint | "uint" | ✓ | ✓ | ✓ | 4 | ||||
r32sint | "sint" | ✓ | ✓ | ✓ | 4 | ||||
r32float | "unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 4 |||
rg32uint | "uint" | ✓ | ✓ | 8 | |||||
rg32sint | "sint" | ✓ | ✓ | 8 | |||||
rg32float | "unfilterable-float" | ✓ | ✓ | 8 |||||
rgba32uint | "uint" | ✓ | ✓ | 16 | |||||
rgba32sint | "sint" | ✓ | ✓ | 16 | |||||
rgba32float | "unfilterable-float" | ✓ | ✓ | 16 |||||
mixed component width, 32 bits per texel (4-byte render target component alignment) | |||||||||
rgb10a2uint | "uint" | ✓ | ✓ | 4 | 8 | ||||
rgb10a2unorm | "float" ,"unfilterable-float" | ✓ | ✓ | ✓ | ✓ | 4 | 8 | ||
rg11b10ufloat | "float" ,"unfilterable-float" | If "rg11b10ufloat-renderable" is enabled | 4 | 8 |
26.1.2. Depth-stencil formats
A depth-or-stencil format is any format with depth and/or stencil aspects.A combined depth-stencil format is a depth-or-stencil format that has bothdepth and stencil aspects.
All depth-or-stencil formats support the COPY_SRC
, COPY_DST
, TEXTURE_BINDING
, and RENDER_ATTACHMENT
usages.All of these formats support multisampling.However, certain copy operations also restrict the source and destination formats.
Depth textures cannot be used with "filtering"
samplers, but can alwaysbe used with "comparison"
samplers even if they use filtering.
Format | Texel block memory cost (Bytes) | Aspect | GPUTextureSampleType | Valid image copy source | Valid image copy destination | Texel block copy footprint (Bytes) | Aspect-specific format |
---|---|---|---|---|---|---|---|
stencil8 | 1 − 4 | stencil | "uint" | ✓ | 1 | stencil8 | |
depth16unorm | 2 | depth | "depth" , "unfilterable-float" | ✓ | 2 | depth16unorm | |
depth24plus | 4 | depth | "depth" , "unfilterable-float" | ✗ | – | depth24plus | |
depth24plus-stencil8 | 4 − 8 | depth | "depth" , "unfilterable-float" | ✗ | – | depth24plus | |
stencil | "uint" | ✓ | 1 | stencil8 | |||
depth32float | 4 | depth | "depth" , "unfilterable-float" | ✓ | ✗ | 4 | depth32float |
depth32float-stencil8 | 5 − 8 | depth | "depth" , "unfilterable-float" | ✓ | ✗ | 4 | depth32float |
stencil | "uint" | ✓ | 1 | stencil8 |
24-bit depth refers to a 24-bit unsigned normalized depth format with a range from0.0 to 1.0, which would be spelled "depth24unorm" if exposed.
26.1.2.1. Reading and Sampling Depth/Stencil Textures
It is possible to bind a depth-aspect GPUTextureView either to a texture_depth_* binding or to a binding with other non-depth 2d/cube texture types.
A stencil-aspect GPUTextureView must be bound to a normal texture binding type. The sampleType in the GPUBindGroupLayout must be "uint".
Reading or sampling the depth or stencil aspect of a texture behaves as if the texture contains the values (V, X, X, X), where V is the actual depth or stencil value, and each X is an implementation-defined unspecified value.
For depth-aspect bindings, the unspecified values are not visible through bindings with texture_depth_* types.
If a depth texture is bound to tex with type texture_2d<f32>:
-
textureSample(tex, ...) will return vec4<f32>(D, X, X, X).
-
textureGather(0, tex, ...) will return vec4<f32>(D1, D2, D3, D4).
-
textureGather(2, tex, ...) will return vec4<f32>(X1, X2, X3, X4) (a completely unspecified value).
Note: Short of adding a new, more constrained stencil sampler type (like depth), it’s infeasible for implementations to efficiently paper over the driver differences for depth/stencil reads. As this was not a portability pain point for WebGL, it’s not expected to be problematic in WebGPU. In practice, expect either (V, V, V, V) or (V, 0, 0, 1) (where V is the depth or stencil value), depending on hardware.
26.1.2.2. Copying Depth/Stencil Textures
The depth aspects of depth32float formats ("depth32float" and "depth32float-stencil8") have a limited range. As a result, copies into such textures are only valid from other textures of the same format.
The depth aspects of depth24plus formats ("depth24plus" and "depth24plus-stencil8") have opaque representations (implemented as either 24-bit depth or "depth32float"). As a result, depth-aspect image copies are not allowed with these formats.
NOTE:
It is possible to imitate these disallowed copies:
-
All of these formats can be written in a render pass using a fragment shader that outputs depth values via the frag_depth output.
-
Textures with "depth24plus" formats can be read as shader textures, and written to a texture (as a render pass attachment) or a buffer (via a storage buffer binding in a compute shader).
26.1.3. Packed formats
All packed texture formats support COPY_SRC, COPY_DST, and TEXTURE_BINDING usages. All of these formats are filterable. None of these formats are renderable or support multisampling.
A compressed format is any format with a block size greater than 1×1.
Note: The texel block memory cost of each of these formats is the same as its texel block copy footprint.
Format | Texel block copy footprint (Bytes) | GPUTextureSampleType | Texel block width/height | Feature |
---|---|---|---|---|
rgb9e5ufloat | 4 | "float" ,"unfilterable-float" | 1 × 1 | |
bc1-rgba-unorm | 8 | "float" ,"unfilterable-float" | 4 × 4 | texture-compression-bc |
bc1-rgba-unorm-srgb | ||||
bc2-rgba-unorm | 16 | |||
bc2-rgba-unorm-srgb | ||||
bc3-rgba-unorm | 16 | |||
bc3-rgba-unorm-srgb | ||||
bc4-r-unorm | 8 | |||
bc4-r-snorm | ||||
bc5-rg-unorm | 16 | |||
bc5-rg-snorm | ||||
bc6h-rgb-ufloat | 16 | |||
bc6h-rgb-float | ||||
bc7-rgba-unorm | 16 | |||
bc7-rgba-unorm-srgb | ||||
etc2-rgb8unorm | 8 | "float" ,"unfilterable-float" | 4 × 4 | texture-compression-etc2 |
etc2-rgb8unorm-srgb | ||||
etc2-rgb8a1unorm | 8 | |||
etc2-rgb8a1unorm-srgb | ||||
etc2-rgba8unorm | 16 | |||
etc2-rgba8unorm-srgb | ||||
eac-r11unorm | 8 | |||
eac-r11snorm | ||||
eac-rg11unorm | 16 | |||
eac-rg11snorm | ||||
astc-4x4-unorm | 16 | "float" ,"unfilterable-float" | 4 × 4 | texture-compression-astc |
astc-4x4-unorm-srgb | ||||
astc-5x4-unorm | 16 | 5 × 4 | ||
astc-5x4-unorm-srgb | ||||
astc-5x5-unorm | 16 | 5 × 5 | ||
astc-5x5-unorm-srgb | ||||
astc-6x5-unorm | 16 | 6 × 5 | ||
astc-6x5-unorm-srgb | ||||
astc-6x6-unorm | 16 | 6 × 6 | ||
astc-6x6-unorm-srgb | ||||
astc-8x5-unorm | 16 | 8 × 5 | ||
astc-8x5-unorm-srgb | ||||
astc-8x6-unorm | 16 | 8 × 6 | ||
astc-8x6-unorm-srgb | ||||
astc-8x8-unorm | 16 | 8 × 8 | ||
astc-8x8-unorm-srgb | ||||
astc-10x5-unorm | 16 | 10 × 5 | ||
astc-10x5-unorm-srgb | ||||
astc-10x6-unorm | 16 | 10 × 6 | ||
astc-10x6-unorm-srgb | ||||
astc-10x8-unorm | 16 | 10 × 8 | ||
astc-10x8-unorm-srgb | ||||
astc-10x10-unorm | 16 | 10 × 10 | ||
astc-10x10-unorm-srgb | ||||
astc-12x10-unorm | 16 | 12 × 10 | ||
astc-12x10-unorm-srgb | ||||
astc-12x12-unorm | 16 | 12 × 12 | ||
astc-12x12-unorm-srgb |