instead of relying on an implicit value which doesn't make much sense.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33067>
This limits the address register to simple cases inside a block.
Validation ensures that the address register is only written once and
read once.
Instruction scheduling ensures that instructions which use the address
register in the generator are not scheduled while the register is still
in use in the IR.
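As a rough sketch of the write-once/read-once rule (hypothetical
instruction/register types, not the actual brw validator code), the
per-block check amounts to:

#include <stdbool.h>

struct insn {
   bool writes_addr_reg;   /* destination is the address register */
   bool reads_addr_reg;    /* a source is the address register */
};

static bool
validate_addr_reg_usage(const struct insn *block, unsigned num_insns)
{
   unsigned writes = 0, reads = 0;

   for (unsigned i = 0; i < num_insns; i++) {
      writes += block[i].writes_addr_reg;
      reads  += block[i].reads_addr_reg;
   }

   /* Only the simple case is allowed: at most one write and one read,
    * both within the same block. */
   return writes <= 1 && reads <= 1;
}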
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28199>
We want to reuse the brw::nr field as a virtual address register
identifier, so we can't use brw::file=ARF with brw::nr=ADDRESS.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28199>
Rather than emitting FS_OPCODE_UNIFORM_PULL_CONSTANT_LOAD to do block
loads that were cacheline aligned, loading entire cachelines at a time,
we now rely on NIR passes to group, CSE, and vectorize things into
appropriately sized blocks. This means that we'll usually still load
a cacheline, but we may load only 32B if we don't actually need anything
from the full 64B. Prior to Xe2, this saves us registers, and it ought
to save us some bandwidth as well, since the response length can be
lowered.
The cacheline-aligning hack was the main reason not to simply call
fs_nir_emit_memory_access(), so now we do that instead, porting yet
one more thing to the common memory opcode framework.
We unfortunately still emit the old FS_OPCODE_UNIFORM_PULL_CONSTANT_LOAD
opcode for non-block intrinsics. We'd have to clean up 16-bit handling
among other things in order to eliminate this, but we should in the
future.
fossil-db results on Alchemist for this and the previous patch together:
Instrs: 161481888 -> 161297588 (-0.11%); split: -0.12%, +0.01%
Subgroup size: 8102976 -> 8103000 (+0.00%)
Send messages: 7895489 -> 7846178 (-0.62%); split: -0.67%, +0.05%
Cycle count: 16583127302 -> 16703162264 (+0.72%); split: -0.57%, +1.29%
Spill count: 72316 -> 67212 (-7.06%); split: -7.25%, +0.19%
Fill count: 134457 -> 125970 (-6.31%); split: -6.83%, +0.52%
Scratch Memory Size: 4093952 -> 3787776 (-7.48%); split: -7.53%, +0.05%
Max live registers: 33037765 -> 32947425 (-0.27%); split: -0.28%, +0.00%
Max dispatch width: 5780288 -> 5778536 (-0.03%); split: +0.17%, -0.20%
Non SSA regs after NIR: 177862542 -> 178816944 (+0.54%); split: -0.06%, +0.60%
In particular, several titles see incredible reductions in spill/fills:
Shadow of the Tomb Raider: -65.96% / -65.44%
Batman: Arkham City GOTY: -53.49% / -28.57%
Witcher 3: -16.33% / -14.29%
Total War: Warhammer III: -9.60% / -10.14%
Assassins Creed Odyssey: -6.50% / -9.92%
Red Dead Redemption 2: -6.77% / -8.88%
Far Cry: New Dawn: -7.97% / -4.53%
Improves performance in many games on Arc A750:
Cyberpunk 2077: 5.8%
Witcher 3: 4%
Shadow of the Tomb Raider: 3.3%
Assassins Creed: Valhalla: 3%
Spiderman Remastered: 2.75%
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
The hope here is to replace our backend handling for loading whole
cachelines at a time from UBOs with NIR-based handling, which plays
nicely with the NIR load/store vectorizer.
Rounding down offsets to multiples of 64B allows us to globally CSE
UBO loads across basic blocks. This is really useful. However, blindly
rounding down the offset to a multiple of 64B can trigger anti-patterns
where...a single unaligned memory load could have hit all the necessary
data, but rounding it down split it into two loads.
By moving this to NIR, we gain more control of the interplay between
nir_opt_load_store_vectorize and this rebasing and CSE'ing. The backend
can then simply load between nir_def_{first,last}_component_read() and
trust that our NIR has the loads blockified appropriately.
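As a standalone illustration of that anti-pattern (plain arithmetic,
not Mesa code), consider a 32B access at offset 48:

#include <stdio.h>

int main(void)
{
   unsigned offset = 48, size = 32;             /* bytes 48..79 */

   /* A single unaligned 32B load at offset 48 covers everything.
    * Blindly rebasing to the 64B-aligned offset below it does not: */
   unsigned rebased = offset & ~63u;            /* rounded down to 0 */
   unsigned span    = offset + size - rebased;  /* 80 bytes to cover */
   unsigned loads   = (span + 63) / 64;         /* 2 cacheline loads */

   printf("rebased to %u, %u bytes -> %u cacheline load(s)\n",
          rebased, span, loads);
   return 0;
}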
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
This will translate to HDC Constant Cache loads or LSC UGM loads.
On LSC, MEMORY_MODE_UNTYPED would be fine, but for HDC we need to
distinguish between the regular and constant cache data ports.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
The NIR vectorizer may produce block loads with unread trailing
components. Upcoming passes may produce unread leading components
as well. With a bit of finesse, we can skip loading those, and only
bother with the ones we actually need. This can sometimes save us on
loads and MOVs.
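A sketch of the trimming itself, using plain bit operations on a
component read mask (standalone code, not the actual NIR helpers or
backend lowering):

/* e.g. read_mask 0x3c (0b00111100) on an 8-component load: only
 * components 2..5 are used, so load 4 components starting at 2. */
static void
trim_block_load(unsigned read_mask, unsigned num_components,
                unsigned *first, unsigned *last)
{
   *first = 0;
   while (*first < num_components - 1 && !(read_mask & (1u << *first)))
      (*first)++;

   *last = num_components - 1;
   while (*last > *first && !(read_mask & (1u << *last)))
      (*last)--;
}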
v2: Skip this for SLM reads on pre-LSC platforms (caught by Lionel).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
If we pass an immediate, just trivially return that immediate.
This preserves the property that if x was an IMM, emit_uniformize(x)
will also be an IMM, without the need for optimizations to eliminate
unnecessary operations. That way, you can call emit_uniformize() on
a value and still check whether it's constant afterwards.
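A minimal sketch of that property (hypothetical types, not the actual
brw builder code):

struct reg { int file; };   /* file == IMM marks an immediate */
enum { IMM = 1, VGRF = 2 };

static struct reg
emit_uniformize(struct reg src)
{
   if (src.file == IMM)
      return src;   /* already uniform; stays recognizable as a constant */

   /* ... otherwise emit the usual broadcast of the first channel ... */
   return src;
}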
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
We return an immediate for 32-bit constant values, but fall back to
calling get_nir_src() for other values, as 64-bit and even 8-bit
immediates have odd restrictions. We could probably support 16-bit
here without too many issues, but we leave it be for now.
This makes it usable for cases where we'd like to get constants for
32-bit values but where the value may have a different bit-size too.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
We were already skipping unread trailing components, but now we skip
them on both ends.
About -3.5% spills on Shadow of the Tomb Raider on Alchemist (mostly a
wash elsewhere, but it will help additional shaders with later patches).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
HDC doesn't support block loads/stores with sub-DWord (<4B) aligned
offsets, and shared local memory has to use the Aligned OWord Block
messages which require OWord (16B) alignment.
Make the validator detect this case and say no. Also make the lowering
code assert that the alignment is valid as a second line of defense.
LSC has no such restrictions.
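The rule being enforced looks roughly like this (hypothetical helper,
not the actual validator code):

#include <stdbool.h>

static bool
block_access_alignment_ok(bool has_lsc, bool is_slm, unsigned align_bytes)
{
   if (has_lsc)
      return true;                      /* LSC: no such restriction */

   /* HDC: SLM block access needs OWord (16B) alignment, everything
    * else needs at least DWord (4B) alignment. */
   return align_bytes >= (is_slm ? 16u : 4u);
}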
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32888>
Just assert that the array will fit whatever the MAX is for a given
Gfx version.
Fixes: 172c1ab984 ("intel/elk: Add ELK_MAX_MRF_ALL for static allocating arrays")
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32978>
These were causing trouble in some build configurations, and we don't
really need them. Unless there's a good reason otherwise, default to
using ralloc for consistency with the larger codebase.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Antonio Ospite <None>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32916>
These were causing trouble in some build configurations, and we don't
really need them. Use ralloc for consistency.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Antonio Ospite <None>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32916>
Fix defect reported by Coverity Scan.
Side effect in assertion (ASSERT_SIDE_EFFECT)
assert_side_effect: Argument ++eot_count of assert() has a side effect.
The containing function might work differently in a non-debug build.
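The general shape of the fix for this class of defect (illustrative
only; the actual condition in the ELK code may differ) is to move the
side effect out of the assert:

#include <assert.h>
#include <stdbool.h>

static int eot_count;

static void
count_eot(bool is_eot)
{
   if (is_eot) {
      /* Wrong: assert(++eot_count == 1); -- the increment vanishes
       * when NDEBUG is defined. */
      eot_count++;
      assert(eot_count == 1);
   }
}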
Fixes: ebd6738260 ("intel/elk/chv: Implement WaClearArfDependenciesBeforeEot")
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32884>
Marek recently changed hole_size to be signed, rather than unsigned.
A negative hole_size means that the two loads overlap - and thus are
prime candidates to be combined.
My original hole_size handling was:
if hole_size > 4 * (8 - low->num_components) then don't vectorize
For non-overlapping loads, this worked: NIR's largest vector is vec16,
and if low was already a vec16, combining it with anything would exceed
that, so it'd never be considered. That meant low would always be a
vec8 or less, so (8 - low->num_components) was a positive number.
Now that we see overlapping loads, we can see a vec16 low, vec4 high,
and also a negative hole size, giving us fun comparisons like:
-16 > 4 * (8 - 16) => -16 > -32 => true, don't vectorize
Which is absolutely the wrong thing to do, because the high load's data
is entirely contained within the low load's data.
The idea here was to make sure the second load would be able to pack at
least one component into the first's V8 result. But even this isn't the
best, because...even if it's simply adjacent, doing one V16 load is more
efficient than requesting two back-to-back V8 loads.
So, we just simplify down to a static check: if there's an entire V8 of
hole, don't vectorize. This already won't happen because the core pass
has max_hole set to 28 bytes (7 32-bit components), but that could
change based on the needs of other drivers, so let's be defensive.
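In code form (field names follow the pseudocode above; the 32-byte
threshold is one reading of "an entire V8 of hole", and the patch's
exact comparison may differ):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
   int hole_size = -16;           /* negative: the loads overlap */
   int low_num_components = 16;   /* low is already a vec16 */

   /* Old: -16 > 4 * (8 - 16)  =>  -16 > -32  =>  rejects a free merge. */
   bool old_reject = hole_size > 4 * (8 - low_num_components);

   /* New: only reject when a whole V8 (8 dwords == 32B) would be wasted. */
   bool new_reject = hole_size > 32;

   printf("old rejects: %d, new rejects: %d\n", old_reject, new_reject);
   return 0;
}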
fossil-db results on Alchemist:
Instrs: 161533978 -> 161295137 (-0.15%); split: -0.20%, +0.05%
Subgroup size: 8092544 -> 8092568 (+0.00%)
Send messages: 7915233 -> 7844503 (-0.89%); split: -0.94%, +0.05%
Cycle count: 16577700697 -> 16702609256 (+0.75%); split: -0.59%, +1.35%
Spill count: 72338 -> 67226 (-7.07%); split: -7.36%, +0.29%
Fill count: 134058 -> 125980 (-6.03%); split: -6.83%, +0.80%
Scratch Memory Size: 4092928 -> 3786752 (-7.48%); split: -7.53%, +0.05%
Max live registers: 33031460 -> 32945994 (-0.26%); split: -0.27%, +0.01%
Max dispatch width: 5778384 -> 5778536 (+0.00%); split: +0.26%, -0.26%
Non SSA regs after NIR: 179809505 -> 152735471 (-15.06%); split: -15.08%, +0.03%
Fixes: c21bc65ba7 ("nir/opt_load_store_vectorize: make hole_size signed to indicate overlapping loads")
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32932>
When a SEND instruction is an EOT, the scoreboard lowering will not
allocate a new SBID for it, since nothing needs to wait for it. In
Gfx12 this allowed the SEND to get out-of-order $.dst or $.src
dependencies.
Starting with Xe2, this is no longer supported, in favor of supporting
more combined modes.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32712>
The push_constant_loc[] array is always an identity mapping these days,
so it's kind of pointless. Just use the original uniform number and
skip the unnecessary "remap" step. With that gone, and shrinking UBO
ranges gone, assign_constant_locations() is now empty and can be removed
as well.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32841>
Now that we never shrink ranges in the backend, we never lower push
constants to pull constants late in the backend either. get_pull_loc
will never return true, and so all of brw_lower_constant_loads becomes
a noop.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32841>
Back in the bad old days (vec4?) we had a bunch of smarts in the backend
to dead code eliminate unused vector components and re-pack regular
uniforms, so we really couldn't decide how much data we were pushing
until very late in the backend. Nowadays we have none of that - we do
all of our elimination and packing in NIR. anv shrinks ranges to deal
with Vulkan API push constants, and iris treats everything as a UBO and
as of the previous commit will also shrink appropriately.
So we don't need to do this anymore...which will let us simplify quite
a bit of code.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32841>
anv already does this limiting, since it needs to handle non-UBO push
constants as well. iris treats everything as a UBO, but doesn't have
a limiter and was relying on the backend to handle it.
Do this in the NIR pass so that we can eliminate the backend code.
It's not necessary for anv, but handling it here is simple and less
error prone for iris, which calls this in a number of places. We know
we need to limit things to this much; anv can limit more if needed.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32841>
A negative hole size means that the loads overlap. This lets drivers
easily handle overlapping loads in the callback.
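For illustration, the hole size is essentially the gap between the two
accesses (simplified; the pass derives it from the intrinsics' offsets
and sizes):

static int
hole_size_bytes(unsigned low_offset, unsigned low_size,
                unsigned high_offset)
{
   /* Zero: adjacent.  Positive: a hole between the loads.
    * Negative: the high load starts inside the low one (overlap). */
   return (int)high_offset - (int)(low_offset + low_size);
}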
Reviewed-by: Mel Henning <drawoc@darkrefraction.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32699>