All remaining uses of that constructor would also use at_end(),
and vice-versa. So just implement that behavior in the constructor
itself.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33815>
Memory fences do not refer to an element of a binding table. Rather,
the reason we had "BTI" in these opcodes was to distinguish what in
modern terms are called UGM (untyped memory data cache) vs. SLM
(cross-thread shared local memory) fences.
Icelake and older platforms used the "data cache" SFID for both
purposes, distinguishing them by having a special binding table
index, 254, meaning "this is actually SLM access". This is where
the notion that fences had BTIs came in. (In fact, prior to Icelake,
separate SLM fences were not a thing, so BTI wasn't used there either.)
To avoid confusion about BTI being involved, we choose a simpler lie: we
have Icelake SLM fences target GFX12_SFID_SLM (like modern platforms
would), even though it didn't really exist back then. Later lowering
code sets it back to the correct Data Cache SFID with the magic SLM
binding table index. This eliminates the BTI everywhere and drops an
unnecessary instruction source.
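A minimal sketch of the lowering idea, with hypothetical enum and
function names standing in for the real Mesa definitions:

enum sfid { SFID_UGM, SFID_SLM, SFID_DATA_CACHE };

struct fence_info {
   enum sfid sfid;
   unsigned bti;
};

/* Icelake and older have no separate SLM SFID, so the late lowering
 * rewrites the "SLM" fence to the data-cache SFID with the magic
 * binding table index 254. */
static void
lower_slm_fence_pre_gfx12(struct fence_info *f, unsigned gfx_ver)
{
   if (gfx_ver <= 11 && f->sfid == SFID_SLM) {
      f->sfid = SFID_DATA_CACHE;
      f->bti = 254;
   }
}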
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33297>
brw_memory_fence() overrides the instructions generated by the
MEMORY_FENCE or INTERLOCK opcodes to be force_writemask_all with
exec_size == 1. But we were emitting the IR instruction as SIMD8
(regardless of dispatch width). Instead, just emit the IR as SIMD1/NoMask so
the IR matches what we actually generate. Have size_written indicate
that the entire destination is written, however, as it is ultimately
going to be a SEND that writes a whole register.
We were also using a UD register for the source of
FS_OPCODE_SCHEDULING_FENCE when the generator overrides it to UW,
so just specify UW in the IR as well so that they line up.
Also add validation for MEMORY_FENCE/INTERLOCK to check that the
exec_size and masking are set correctly in the IR.
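A minimal sketch of the invariant being validated, with hypothetical
struct and field names rather than the real brw IR types:

#include <assert.h>
#include <stdbool.h>

struct fence_inst {
   unsigned exec_size;         /* channels executed, 1 for fences */
   bool force_writemask_all;   /* NoMask */
   unsigned size_written;      /* bytes written to the destination */
};

static void
validate_fence_or_interlock(const struct fence_inst *inst, unsigned reg_size)
{
   /* The IR must already be SIMD1/NoMask so it matches the generated SEND. */
   assert(inst->exec_size == 1);
   assert(inst->force_writemask_all);
   /* The SEND still writes a whole register. */
   assert(inst->size_written == reg_size);
}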
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33297>
This will be useful for pulling constants in device bound shaders. A64
allows us to put the constants anywhere.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32895>
Now using the same model as the compute shader.
As a result we temporarily disable the use of the Inline register for
providing push constants on Task & Mesh shaders. Since that register
is also available on the compute shader, we'll try to find a way to use
the same mechanism for all 3 shaders in another MR and bring back that
optimization.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32895>
This is similar to what we used to do on pre-SNB platforms: the number
of GRF registers used by the shader will be used on Xe3+ to adjust the
trade-off between thread-level parallelism and the size of the GRF file.
Plumb the value through prog_data so the driver can set up the
hardware state accordingly.
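A minimal sketch of the plumbing, using a hypothetical struct name in
place of the real brw prog_data definitions:

struct sketch_prog_data {
   /* Number of GRF registers the compiled shader uses; on Xe3+ the
    * driver uses this to pick the per-thread register file size,
    * trading GRF space against the number of concurrently running
    * threads. */
   unsigned grf_used;
};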
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32664>
This is similar in principle to the previous commit "intel/brw/xe3+:
brw_compile_fs() implementation for Xe3+." but applied to compute-like
shader stages. It changes the implementation of brw_compile_cs/task/mesh()
to reduce compile time and take advantage of wider dispatch modes more
aggressively than the original logic, since as of Xe3 SIMD32 builds
succeed without spills in most cases thanks to VRT.
The new "optimistic" SIMD selection logic starts with the SIMD width
that is potentially highest performance and only compiles additional
narrower variants if that fails (typically due to spilling), while the
old "pessimistic" logic did the opposite: It started with the
narrowest SIMD width and compiled additional variants with increasing
register pressure until one of them failed to compile.
In typical non-spilling cases where we formerly compiled SIMD16 and
SIMD32 variants of the same compute shader, this change will halve the
number of backend compilations required to build it.
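A minimal sketch of the new selection order, with a hypothetical
callback standing in for the real per-width compile attempt:

#include <stdbool.h>

static unsigned
select_simd_width_optimistic(bool (*try_compile)(unsigned simd_width))
{
   static const unsigned widths[] = { 32, 16, 8 };
   for (unsigned i = 0; i < sizeof(widths) / sizeof(widths[0]); i++) {
      /* A variant typically only fails to compile when it spills. */
      if (try_compile(widths[i]))
         return widths[i];   /* first success is the widest viable width */
   }
   return 0;   /* no variant compiled */
}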
XXX - Possibly don't do this in cases with variable workgroup size
until effect on runtime performance can be measured directly.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
v2: Don't do this for now in cases with variable workgroup size, still
compile every possible variant in such cases.
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32664>
Instead of the complicated logic we currently have, do this:
We start with this shader:
int main() {
   ...
   if (...) {
      SetMeshOutputsEXT(0, 0);
      return;
   } else {
      SetMeshOutputsEXT(...);
   }
   ...
}
We turn it into this:
int main() {
   uint __temp_prim_count = 0;
   ...
   if (...) {
      __temp_prim_count = 0;
      return;
   } else {
      __temp_prim_count = ...;
   }
   ...
   if (is_first_group_lane()) {
      SetMeshOutputsEXT(..., __temp_prim_count);
   }
}
This works because the SPIR-V spec says:
"The arguments are taken from the first invocation in each
workgroup. Any invocation must execute this instruction no more
than once and under uniform control flow."
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/12388
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33038>
Disable mesh autostrip for platforms that need WA 16020916187.
Additionally, zero out the layer and viewport slots when a shading rate
is found through the brw_nir_initialize_mue pass.
Signed-off-by: Rohan Garg <rohan.garg@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32751>
The InlineData passed to the shader has a fixed size unrelated to the
register size. It happens to match the register size on pre-Xe2
platforms, but by treating the two as equal on Xe2, we ended up reading
pushed constants from the wrong place when they didn't fit in the
InlineData.
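A minimal sketch of the offset computation, assuming a 32-byte inline
data block and a 64-byte Xe2 GRF (sizes used for illustration only):

#define INLINE_DATA_SIZE_B 32u   /* fixed; happens to match the 32B pre-Xe2 GRF */
#define XE2_GRF_SIZE_B     64u   /* Xe2 register size */

static unsigned
pushed_constant_offset(unsigned offset_past_inline_data)
{
   /* Constants that do not fit in the InlineData start right after its
    * fixed size.  Basing this on XE2_GRF_SIZE_B instead would skip 32
    * valid bytes on Xe2. */
   return INLINE_DATA_SIZE_B + offset_past_inline_data;
}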
Fixes: 97b17aa0b1 ("brw/nir: rework inline_data_intel to work with compute")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31856>
This intrinsic was initially dedicated to mesh/task shaders, but the
mechanism it exposes also exists in the compute shaders on Gfx12.5+.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31508>
In SIMD32, the fence instruction currently ends up reading grf0-3,
leading to assertions like this in the backend:
../src/intel/compiler/brw_fs_reg_allocate.cpp:206:
void fs_visitor::calculate_payload_ranges(bool, unsigned int, int*) const:
Assertion `j < payload_node_count' failed.
The reason we haven't seen the problem yet is that there are always
enough payload registers to accommodate this. But the following change
is going to make the inline parameter register optional.
Since SHADER_OPCODE_MEMORY_FENCE is emitted in the generator as SIMD1
NoMask (see brw_memory_fence), we can limit ourselves to SIMD1
exec_all() in the IR as well so that the IR accounts for grf0 as a
source.
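Back-of-the-envelope for the overrun, assuming 32-byte GRFs and a
dword (UD) per channel; a worked example, not actual compiler code:

enum {
   GRF_SIZE_B    = 32,                 /* assumed GRF size in bytes */
   TYPE_SZ_UD_B  = 4,                  /* bytes per UD channel */
   SIMD32_READ_B = 32 * TYPE_SZ_UD_B,  /* 128 bytes -> grf0..grf3 */
   SIMD1_READ_B  = 1 * TYPE_SZ_UD_B    /* 4 bytes -> stays within grf0 */
};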
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: mesa-stable
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31508>