Borderlands 3 (both the DX11 and DX12 renderers) has a common pattern
across many shaders:
con 32x4 %510 = (uint32)txf %2 (handle), %1191 (0x10) (coord), %1 (0x0) (lod), 0 (texture)
con 32x4 %512 = (uint32)txf %2 (handle), %1511 (0x11) (coord), %1 (0x0) (lod), 0 (texture)
...
con 32x4 %550 = (uint32)txf %2 (handle), %1549 (0x25) (coord), %1 (0x0) (lod), 0 (texture)
con 32x4 %552 = (uint32)txf %2 (handle), %1551 (0x26) (coord), %1 (0x0) (lod), 0 (texture)
A single basic block contains piles of texelFetches from a 1D buffer
texture, with constant coordinates. In most cases, only the .x channel
of the result is read. So we have something on the order of 28 sampler
messages, each asking for...a single uint32_t scalar value. Because our
sampler doesn't have any support for convergent block loads (like the
untyped LSC transpose messages for SSBOs), we were emitting SIMD8/16
(or SIMD16/32 on Xe2) sampler messages for every single scalar,
replicating what's effectively a SIMD1 value across the entire register.
This is hugely wasteful, both in terms of register pressure and in the
back-and-forth of sending and receiving memory messages.
The good news is we can take advantage of our explicit SIMD model to
handle this more efficiently. This patch adds a new optimization pass
that detects a series of SHADER_OPCODE_TXF_LOGICAL, in the same basic
block, with constant offsets, from the same texture. It constructs a
new divergent coordinate where each channel is one of the constants
(i.e. <0x10, 0x11, 0x12, ..., 0x26> in the above example). It issues a
NoMask divergent texel fetch which loads N useful channels in one go,
and replaces the rest with expansion MOVs that splat the SIMD1 result
back to the full SIMD width. (These get copy propagated away.)
We can pick the SIMD size of the load independently of the native shader
width as well. On Xe2, those 28 convergent loads become a single SIMD32
ld message. On earlier hardware, we use 2 SIMD16 messages. Or we can
use a smaller size when there aren't many to combine.
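As a rough illustration of the batching logic (a standalone C sketch,
not the actual backend pass, which operates on the IR):

#include <stdio.h>

#define SIMD_WIDTH 16   /* 32 on Xe2 */

/* Group n constant texel coordinates into SIMD-sized batches, each
 * becoming one NoMask divergent fetch; channel c of a batch's result
 * replaces the original scalar fetch of coords[i + c]. */
static void
batch_fetches(const unsigned *coords, unsigned n)
{
   for (unsigned i = 0; i < n; i += SIMD_WIDTH) {
      unsigned count = (n - i) < SIMD_WIDTH ? (n - i) : SIMD_WIDTH;
      printf("emit fetch: %u coords starting at 0x%x\n",
             count, coords[i]);
   }
}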
In fossil-db, this cuts 27% of send messages in affected shaders, 3-6%
of cycles, 2-3% of instructions, and 8-12% of live registers. On A770,
this improves performance of Borderlands 3 by roughly 2.5-3.5%.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32573>
In panvk we pass absolute view indices to the hardware, so we need to do
the conversion from compacted to absolute at some point. Emitting
absolute indices from nir_lower_multiview initially looks like the
simplest option, but nir_lower_io_to_temporaries will emit a write for
every element of array varyings. This results in unnecessary writes to
disabled views.
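For illustration, the compacted-to-absolute mapping is just "Nth set
bit of the view mask"; a minimal C sketch (the helper name is
hypothetical):

#include <stdint.h>

/* Map a compacted view index (the Nth enabled view) to the absolute
 * view index, given the enabled-view mask.  Assumes
 * compacted < popcount(view_mask). */
static unsigned
absolute_view_index(uint32_t view_mask, unsigned compacted)
{
   for (unsigned i = 0; i < compacted; i++)
      view_mask &= view_mask - 1;     /* clear the lowest set bit */
   return __builtin_ctz(view_mask);   /* position of the next set bit */
}

/* e.g. view_mask = 0b1010: compacted 0 -> view 1, compacted 1 -> view 3 */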
Signed-off-by: Benjamin Lee <benjamin.lee@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31704>
This is needed for implementing multiview in panvk, where the address
calculation for multiview outputs is not well-represented by lowering to
nir_intrinsic_store_output with a single offset.
The case where a variable is both per-view and per-{vertex,primitive} is
now unsupported. This would come up with drivers implementing
NV_mesh_shader or using nir_lower_multiview on geometry, tessellation,
or mesh shaders. No drivers currently do either of these. There was some
code that attempted to handle the nested per-view case by unwrapping
per-view/arrayed types twice, but it's unclear to what extent this
actually worked.
ANV and Turnip both rely on per-view outputs being assigned a unique
driver location for each view, so I've added an option to configure that
behavior rather than removing it.
Signed-off-by: Benjamin Lee <benjamin.lee@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Connor Abbott <cwabbott0@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31704>
The last argument seems to be used as brw_shader_reloc::delta (from
brw_add_reloc), and we're unconditionally setting it to 0 here, while
the other place where we handle nir_intrinsic_load_reloc_const_intel
seems to be setting the base appropriately.
I found this by inspection while debugging a bug related to this code,
so I'm not aware of any workloads that get improved by this patch.
Related patches:
- ecbec25e84 ("intel/nir: add reloc delta to load_reloc_const_intel intrinsic")
- 99047451c9 ("intel/fs: add plumbing for embedded samplers")
Fixes: ecbec25e84 ("intel/nir: add reloc delta to load_reloc_const_intel intrinsic")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32531>
Clang is capable of tracking what headers it imports, as long as we set
it up to do that. While that isn't important for rusticl, it would be
useful for the various `_clc` tools, as they can then tell Ninja which
headers they read to make rebuilds more reliable.
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Dylan Baker <dylan.c.baker@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32505>
This is the selector, and it must always be a uniform UD, so there's no
reason not to propagate into it.
No shader-db change on any Intel platform.
fossil-db:
All Intel platforms had similar results. (Lunar Lake shown)
Totals:
Instrs: 220507131 -> 220507127 (-0.00%)
Cycle count: 31607052398 -> 31607053364 (+0.00%); split: -0.00%, +0.00%
Totals from 5 (0.00% of 702410) affected shaders:
Instrs: 995 -> 991 (-0.40%)
Cycle count: 86392 -> 87358 (+1.12%); split: -0.07%, +1.19%
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32097>
The next two commits modify the destination regioning in a way that,
while still correct, triggers assertion failures if we try to fix the
regioning here.
Broadcast gets lowered in brw_eu_emit. For the purposes of region
restrictions, let's assume that the final code emission will do the
right thing. Doing a bunch of shuffling here is only going to make a
mess of things.
No shader-db or fossil-db changes on any Intel platform.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32097>
We wanted to check whether the destination region spanned multiple
registers, but we were checking against REG_SIZE, when the register
size is actually REG_SIZE * reg_unit(devinfo) now.
This meant that SIMD32 LOAD_PAYLOAD was always getting SIMD-split
on Xe2 platforms, generating a lot of unnecessary mess for compute
shaders.
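A sketch of the corrected check, assuming this shape (REG_SIZE is the
32-byte GRF unit; reg_unit() is 2 on Xe2, where pairs of GRFs act as
one register):

#include <stdbool.h>

#define REG_SIZE 32   /* bytes per GRF unit */

/* A destination spans multiple registers only if it exceeds the
 * *physical* register size. */
static bool
dst_spans_multiple_regs(unsigned dst_size_bytes, unsigned reg_unit)
{
   return dst_size_bytes > REG_SIZE * reg_unit;   /* was: > REG_SIZE */
}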
fossil-db results on Lunar Lake:
Totals:
Instrs: 146178614 -> 143291988 (-1.97%); split: -1.98%, +0.00%
Subgroup size: 11089632 -> 11089376 (-0.00%); split: +0.00%, -0.00%
Cycle count: 22528892444 -> 22507551650 (-0.09%); split: -0.12%, +0.03%
Max live registers: 48834202 -> 48886685 (+0.11%); split: -0.09%, +0.20%
Totals from 134306 (24.10% of 557327) affected shaders:
Instrs: 28806335 -> 25919709 (-10.02%); split: -10.02%, +0.00%
Subgroup size: 4297680 -> 4297424 (-0.01%); split: +0.00%, -0.01%
Cycle count: 956867650 -> 935526856 (-2.23%); split: -2.84%, +0.61%
Max live registers: 13085711 -> 13138194 (+0.40%); split: -0.33%, +0.73%
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32471>
This is a prerequisite for enabling nir_opt_varyings for all gallium
drivers.
nir_lower_io_passes (called by the GLSL linker) only uses NIR options
to lower indirect IO access before lowering IO and calling
nir_opt_varyings.
Most drivers report full support for indirect IO and lower it themselves,
which prevents compaction of lowered indirectly accessed varyings because
nir_opt_varyings doesn't touch indirect varyings.
Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io> (Rb for asahi)
Reviewed-by: Pavel Ondračka <pavel.ondracka@gmail.com> (for r300)
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32423>
Notably, our convergent block loads were already overfetching: we
rounded up to block sizes of 8, 16, 32, or 64 (LSC-only). But we did
so in the backend, rather than in NIR.
With recent changes, nir_opt_load_store_vectorize allows holes of up
to 28 bytes (7 components at 4 bytes each). This allows us to detect
cases where we did a convergent block load for 1 component (but loaded
a whole vec8), then another load for the next vec8, and combine them
into a single V16 load. Single component loads aren't the most common,
but convergent loads of a vec2 in one group and a vec3 in another are
quite common, and it makes no sense to do V8+V8 loads instead of V16.
For non-block loads, we allow a max hole of 4 bytes. This allows the
common case of XYZ_ + XYZ_ loads (where the last component is unread)
to combine into a single larger load.
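A minimal sketch of the merge test, with illustrative names and the
hole limits described above (28 bytes for convergent block loads, 4
otherwise):

#include <stdbool.h>

/* Can two loads be vectorized, allowing an unread hole of up to
 * max_hole bytes between them? */
static bool
can_merge(unsigned lo_offset, unsigned lo_bytes,
          unsigned hi_offset, unsigned max_hole)
{
   unsigned lo_end = lo_offset + lo_bytes;
   return hi_offset >= lo_end && hi_offset - lo_end <= max_hole;
}

/* XYZ_ + XYZ_: a vec3 at offset 0 and a vec3 at offset 16 leave a
 * 4-byte hole at bytes 12..15, so can_merge(0, 12, 16, 4) is true
 * and we emit one 28-byte load. */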
fossil-db results on Lunar Lake:
Totals:
Instrs: 146692608 -> 146246432 (-0.30%); split: -0.33%, +0.02%
Subgroup size: 11100528 -> 11100512 (-0.00%)
Send messages: 7003425 -> 6862529 (-2.01%); split: -2.01%, +0.00%
Cycle count: 22396273274 -> 22523048654 (+0.57%); split: -1.08%, +1.64%
Spill count: 67671 -> 67594 (-0.11%); split: -1.59%, +1.48%
Fill count: 128999 -> 130223 (+0.95%); split: -1.73%, +2.68%
Scratch Memory Size: 5986304 -> 6042624 (+0.94%); split: -1.40%, +2.34%
Max live registers: 48898858 -> 48881655 (-0.04%); split: -0.05%, +0.01%
Non SSA regs after NIR: 172397792 -> 167577380 (-2.80%); split: -2.80%, +0.00%
Totals from 451003 (80.87% of 557667) affected shaders:
Instrs: 134111754 -> 133665578 (-0.33%); split: -0.36%, +0.03%
Subgroup size: 9039104 -> 9039088 (-0.00%)
Send messages: 6127775 -> 5986879 (-2.30%); split: -2.30%, +0.00%
Cycle count: 20306336726 -> 20433112106 (+0.62%); split: -1.19%, +1.81%
Spill count: 56230 -> 56153 (-0.14%); split: -1.92%, +1.78%
Fill count: 112920 -> 114144 (+1.08%); split: -1.97%, +3.06%
Scratch Memory Size: 3769344 -> 3825664 (+1.49%); split: -2.23%, +3.72%
Max live registers: 43750259 -> 43733056 (-0.04%); split: -0.05%, +0.01%
Non SSA regs after NIR: 158449343 -> 153628931 (-3.04%); split: -3.04%, +0.00%
In particular, sends get cut by 20.85% for Borderlands 3 DX12, 13.82%
on Cyberpunk 2077, 10.75% on Strange Brigade, and 10.20% on Red Dead
Redemption 2. Yet, spill/fills remain about the same.
fossil-db results on Alchemist are similar though not quite as good.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32315>
nir_opt_load_store_vectorize checks for potential address wrapping
when vectorizing two loads ("low" and "high"). It looks for cases where
"low" might have a large address, and "high" has a positive offset
which, when added together, could trigger integer wraparound. The issue
here is that if the large address of "low" was considered out-of-bounds,
adding the offset could wrap around to a small address, which might
actually be in-bounds. Thus, when loaded separately, "low" will fail and
trigger robustness out-of-bounds-read behavior, but "high" would read
correctly.
When vectorized, the entire load would fail. This is explicitly tested
for with 32-bit SSBO addresses in the Vulkan CTS.
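A concrete 32-bit example of the hazard (standalone C):

#include <stdint.h>

static void wraparound_example(void)
{
   uint32_t low_addr  = 0xFFFFFFF8u;      /* out-of-bounds            */
   uint32_t high_addr = low_addr + 0x10u; /* wraps to 0x8: in-bounds! */
   /* Loaded separately, "high" succeeds while "low" hits robustness
    * behavior; vectorized from low_addr, the whole load fails. */
   (void)high_addr;
}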
However, anv's 64-bit global addresses and VMA handling effectively
prevent this case. Addresses 0-4095 are a reserved page so that if
people try to use 0 as a NULL pointer, it never maps to a valid BO.
That alone guarantees that the above case where "high" gets a small
address would never be in-bounds, so we don't need to check for it.
In fact, we allocate most user allocations out of high addresses,
and have specialized allocation heaps for certain types of GPU data
structures in the lower GB of memory. For a load to wrap around and
successfully land in the right heap, it would have to load gigabytes.
Disabling this allows load vectorization and overfetching in more cases.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32315>
Just calculate the block size using util_logbase2() - it's simpler.
Also drop the name "oword" as this refers to legacy HDC messages,
rather than the newer LSC "vector size" field.
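For example, one way to compute a power-of-two block size from a
component count (a sketch; the local helper mirrors util_logbase2(),
i.e. floor(log2), and components is assumed >= 1):

static unsigned logbase2(unsigned x) { return 31 - __builtin_clz(x); }

static unsigned block_size(unsigned components)
{
   unsigned log = logbase2(components);
   if ((1u << log) < components)
      log++;                  /* round up: e.g. 5 components -> 8 */
   return 1u << log;
}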
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32315>
This will matter more with overfetching, where we may suggest loading
additional data that we don't actually need for vectorization purposes.
We want to make sure that push ranges have the data we actually need;
any extra padding is irrelevant.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32315>
i965 used to upload its own regular GL uniforms and push those in
addition to UBO ranges. st/mesa instead uploads regular uniforms
and presents those to us as UBO 0. So this really isn't a thing
anymore.
nir_intrinsic_load_uniform is still used today but it represents
Vulkan push constants. anv_nir_compute_push_layout already takes
care of ensuring too many ranges aren't present, so it doesn't need
the pass to do so. iris doesn't use this intrinsic at all.
We can also drop the compute shader check, because neither iris nor
anv use UBO push analysis for compute shaders - except for anv's
internal kernels, which already have well specified push layouts.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32315>
Moving it to intel_shader_enums.h
The plan is to make it visible to OpenCL shaders.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32329>
There should be no functional change here. This is just trying to make
things more clear: we use the compiled value when it is != BRW_SOMETIMES,
and otherwise use the dynamically computed flags.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32329>
These fields are only valid in certain formats, so set them accordingly.
Note that the !is_send check is needed because FORMAT_BASIC is reused
for SEND/SENDS on some platforms. If we start to see more cases like
that, we can create a new FORMAT for it.
The cond_modifier is trickier because, on top of that, it is not valid
for 64-bit immediates on some platforms. Found when EU validation
complained about moving 64-bit immediates with high bits set.
Fixes: e4440df2d8 ("intel/brw: Add pred/cmod/sat to brw_hw_decoded_inst")
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32287>
This will need to be set to true when the GLSL linker lowers IO, which
can later be unlowered by st/mesa, and then drivers can lower it again
without load_interpolated_input. Therefore, it can't be a global
immutable option.
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32229>
There are new combinations of ordered and unordered dependencies
available for instructions to use, which include, among others:
- combining FLOAT and INT pipe deps in SENDs;
- combining SRC mode deps in regular instructions for the inferred type.
This patch enables a couple of tests checking for the first case.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31375>
We were accidentally doing a signed integer comparison here for ult32,
or a sign-extending shift for ushr.
One notable bit of fallout was that load_global_uniform_block_intel
address calculations broke on platforms that don't have native 64-bit
integer support, as the iadd64 lowering for "do I need to carry?" was
using ult32...and performing the wrong comparison. We spotted this in
Borderlands 3 on Alchemist once we turned on other optimizations.
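A minimal C illustration of the lost carry:

#include <stdint.h>

static unsigned iadd64_carry(uint32_t a, uint32_t b)
{
   uint32_t lo = a + b;
   /* Unsigned compare: correct.  For a = 0xF0000000, b = 0x20000000,
    * lo wraps to 0x10000000 and the carry is 1. */
   unsigned carry_ok  = lo < a;
   /* Signed compare (the bug): (int32_t)0x10000000 < (int32_t)0xF0000000
    * is false, so the carry is lost and the high dword of the address
    * comes out wrong. */
   unsigned carry_bad = (int32_t)lo < (int32_t)a;
   (void)carry_bad;
   return carry_ok;
}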
Thanks to Lionel Landwerlin for helping spot the problem!
Fixes: c7b312ad45 ("brw: factor out source extraction for rematerialization")
Fixes: 339630ab05 ("brw: enable A64 loads source rematerialization")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31995>
This was triggering an assertion in the fs_builder::MOV helper that
the destination stride can't be 0 when dispatch_width > 1. What we
want to do is copy the single 64-bit channel of data from the UNIFORM
file to a VGRF. We can use a SIMD1 builder for that.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/31995>
This fixes a corner case of the LNL sub-dword integer restrictions
that wasn't being detected by has_subdword_integer_region_restriction(),
specifically:
> if(Src.Type==Byte && Dst.Type==Byte && Dst.Stride==1 && W!=2) {
> // ...
> if(Src.Stride == 2) && (Src.UniformStride) && (Dst.SubReg%32 == Src.SubReg/2 ) { Allowed }
> // ...
> }
All the other restrictions that require agreement between the SubReg
numbers of source and destination only affect sources with a stride
greater than a dword, which is why
has_subdword_integer_region_restriction() was returning false except
when "byte_stride(srcs[i]) >= 4" evaluated to true. But as implied by
the pseudocode above, in the particular case of a packed byte
destination the restriction applies to source strides as narrow as 2B.
The form of the equation that relates the subreg numbers is consistent
with the existing calculations in brw_fs_lower_regioning (see
required_src_byte_offset()); we just need to enable lowering for this
corner case and change lower_dst_region() to call lower_instruction()
recursively, since some of the cases where we break this restriction
are copy instructions introduced by brw_fs_lower_regioning() itself
while lowering other instructions with byte destinations.
This fixes some Vulkan CTS test-cases that were hitting these
restrictions with byte data types.
Fixes: 217d412360 ("intel/fs/gfx20+: Implement sub-dword integer regioning restrictions.")
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/30630>
All of the spilling code should work in physical register units
because, for example, SEND messages expect a physical register as
their destination.
So always allocate a full physical register for the spilled/unspilled
values and adjust the offsets of the registers to physical sizes too.
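For illustration, the alignment amounts to rounding offsets up to
whole physical registers (a sketch; REG_SIZE is the 32-byte GRF unit,
reg_unit() is 2 on Xe2):

#define REG_SIZE 32   /* bytes per GRF unit */

static unsigned
align_spill_offset(unsigned offset, unsigned reg_unit)
{
   const unsigned phys_size = REG_SIZE * reg_unit;
   return (offset + phys_size - 1) / phys_size * phys_size;
}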
Cc: mesa-stable
Fixes: aa494cba ("brw: align spilling offsets to physical register sizes")
Closes: mesa/mesa#11967
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Found-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32124>
While the documentation says to use NUM_SIMD_LANES_PER_DSS for the stack
address calculation, what the HW actually uses is
NUM_SYNC_STACKID_PER_DSS. The former may vary depending on the platform,
while the latter is fixed to 2048 for all current platforms.
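For illustration only (the formula and names below are assumptions,
not the documented HW layout), the difference amounts to which
constant scales the per-DSS stack slot:

#include <stdint.h>

#define NUM_SYNC_STACKID_PER_DSS 2048   /* fixed on current platforms */

/* Hypothetical sketch: one RT stack per (DSS, stack ID) slot. */
static uint64_t
rt_stack_addr(uint64_t base, unsigned dss_id, unsigned stack_id,
              unsigned bytes_per_stack)
{
   uint64_t slot = (uint64_t)dss_id * NUM_SYNC_STACKID_PER_DSS + stack_id;
   return base + slot * bytes_per_stack;
}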
Fixes: 6c84cbd8c9 ("intel/dev/xe: Set max_eus_per_subslice using topology query")
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32049>