Mainly this involves changing 'struct state' so that the dep_ready
array is allocated with a dynamic size based on the number of VGRFs of
the program instead of assuming a fixed XE3_MAX_GRF count of GRF
dependencies. VGRF register dependencies are then handled by using
one dep_ready entry per VGRF allocation instead of one per hardware
register.
The ability to use the performance analysis pass pre-regalloc will
mostly be useful on xe3+, but this also has the side effect of saving
some memory on xe2 and earlier platforms since we no longer need to
allocate XE3_MAX_GRF dep_ready entries for them.
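As a rough sketch of the resulting layout (structure and field names
are simplified here, not the actual Mesa declarations):

  #include <stdlib.h>

  struct state {
     unsigned num_vgrfs;
     int *dep_ready;   /* one entry per VGRF allocation */
  };

  static void
  init_state(struct state *st, unsigned num_vgrfs)
  {
     /* Sized from the shader's VGRF count rather than XE3_MAX_GRF. */
     st->num_vgrfs = num_vgrfs;
     st->dep_ready = calloc(num_vgrfs, sizeof(*st->dep_ready));
  }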
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
Reduce the cycle-count cost estimate used by the performance model for
render target writes on xe3+ in order to match the real-world
observation of shaders whose latency is lower than the previously
estimated cost of their render target writes.
In a shader used by Factorio this would have led us to incorrectly
model the shader as fillrate-bound, even though in reality the shader
is EU-bound and benefits from the higher parallelism of SIMD32, so the
subsequent commit that re-enables the static analysis-based SIMD32
heuristic on PTL would lead to a ~2% regression without this tweak.
There appear to be no other regressions or changes from this in
combination with the subsequent commit that enables it to have an
effect, but it is possible that the real cycle-count cost of a render
target write still lies below the estimated value; ~400 is just the
upper bound that can be inferred from the behavior of this test case.
Acked-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
Currently, on platforms without EU fusion (all platforms other than
gfx12.x), we use a constant discard_weight = 1.0 regardless of SIMD
width.  This is far from ideal, in particular because it makes the
performance analysis pass fully insensitive to the presence of discard
jumps.  The scheduler is able to move code past a discard statement,
so the range of the program under discard control flow can vary and
have a material effect on the relative performance of SIMD16
vs. SIMD32, since the scheduler is typically more constrained in
SIMD32 dispatch mode.
In order to fix this, use a discard_weight lower than 1.0 for all
dispatch modes, so that the performance analysis pass accounts for the
presence and range of discard control flow.  In addition, use a lower
discard_weight for SIMD16 dispatch, like we do on Gfx12.x, in order to
account for the higher likelihood of divergent discard in SIMD32 mode.
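As a rough illustration of the mechanism (the values below are
placeholders, not the weights actually chosen for xe3+):

  /* Instructions under discard control flow are charged a fraction of
   * their normal cost, since the EU can jump past them once every lane
   * has been discarded.  SIMD16 gets a lower weight than SIMD32 because
   * a fully discarded group is more likely with fewer lanes. */
  static float
  discard_weight(unsigned dispatch_width)
  {
     return dispatch_width >= 32 ? 0.9f : 0.75f;   /* illustrative only */
  }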
The specific weights were determined iteratively on PTL based on the
final FPS result of several traces that are sensitive to the dispatch
width of one or more fragment shaders using discard, in order to
ensure that we don't end up using the lower-performing dispatch-width
variant in any of those cases.  This avoids regressions between 0.8%
and 3.7% in Superposition-trace-dx11-2160p-extreme,
BaldursGate3-trace-dx11-1440p-ultra and
MetroExodus-trace-dx11-2160p-ultra after enabling the static
analysis-based SIMD32 heuristic on PTL.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com> (v1)
v2: Limit to xe3+ for now since the performance effect seems to be a
wash on xe2.
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
The LSC implements several optimizations for atomic operations on
memory addresses that are uniform across all lanes, in which case
their cost is approximately O(1) instead of O(exec_size).  Even cases
where the memory offsets are non-uniform but packed within a cacheline
appear to have a cost that scales non-linearly with the number of
lanes.
In order to model this behavior more closely, approximate the back-end
cost as roughly 1300 cycles instead of the previous 400 * exec_size/8.
This fixes some cases where we were incorrectly predicting that the
SIMD32 shader would be bound by the throughput of LSC atomic
operations, even though the observed cost per lane of the LSC
operations was significantly lower in SIMD32 mode, so it would have
had the best performance.
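In back-of-the-envelope terms (a simplification, not the actual Mesa
code; only the 400 and 1300 cycle constants come from this commit):

  static unsigned
  lsc_atomic_cost_old(unsigned exec_size)
  {
     return 400 * exec_size / 8;   /* 400 @ SIMD8, 1600 @ SIMD32 */
  }

  static unsigned
  lsc_atomic_cost_new(unsigned exec_size)
  {
     (void)exec_size;
     return 1300;                  /* flat, regardless of dispatch width */
  }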
Clearly this is still a rough approximation, and it might be possible
to obtain a more accurate result by plumbing divergence-analysis data
all the way down to codegen.  However, the goal of the performance
analysis pass isn't to provide an exact prediction of the performance
of a shader (that's not really possible in general via static analysis
without solving the halting problem), but to provide a good enough
approximation at a low cost.  The constant approximation seems to be
strictly better in practice than the one we were using before: there
appear to be no regressions from this change, and
ShadowTombRaider-trace-dx11-2160p-ultra shows 5.7% better performance
on PTL with a subsequent commit that re-enables the use of the static
analysis-based SIMD32 heuristic on xe3+.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
This extends the performance analysis pass used in previous
generations to make it more useful for dealing with the performance
trade-off encountered on xe3 hardware as a result of VRT.  VRT allows
the driver to request a per-thread GRF allocation different from the
128 GRFs that were typical on previous platforms, but this comes with
either a cost or a benefit in thread parallelism depending on the
number of GRF register blocks requested.
This makes a number of decisions more difficult for the compiler since
certain optimizations potentially trade off run-time in a thread
against the total number of threads that can run in parallel
(e.g. consider scheduling and how reordering an instruction to avoid a
stall can increase GRF use and therefore reduce thread-level
parallelism when trying to improve instruction-level parallelism).
This patch provides a simple heuristic tool to account for the
combined interaction of register pressure and other single-threaded
factors that affect performance.  This is expressed by redefining the
pre-existing brw_performance::throughput estimate as the number of
invocations per cycle per EU that would be achieved if there were
enough threads to reach full load (in this sense it is to be
considered a heuristic, since the penalty from VRT may be lower than
this model predicts at low EU load).
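One way to read the redefinition, as a hedged sketch rather than the
exact brw_performance formula (threads_per_eu stands for the number of
threads an EU can keep resident given the shader's GRF block request
under VRT):

  static float
  estimated_throughput(float invocations_per_thread,
                       float issue_cycles_per_thread,
                       float latency_cycles_per_thread,
                       unsigned threads_per_eu)
  {
     /* Bound from EU issue throughput alone. */
     float issue_bound = invocations_per_thread / issue_cycles_per_thread;
     /* Bound from per-thread latency, overlapped across however many
      * threads the GRF allocation still allows to run in parallel. */
     float latency_bound = invocations_per_thread * threads_per_eu /
                           latency_cycles_per_thread;
     return issue_bound < latency_bound ? issue_bound : latency_bound;
  }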
This will be used, e.g., to decide whether to use a more aggressive
latency-minimizing mode during scheduling or a mode more effective at
minimizing register pressure (it makes sense to take the path that
leads to the most invocations being serviced per cycle while under
load).  This also allows us to re-enable the old PS SIMD32 heuristic
on xe3+; thanks to this change it is able to identify cases where the
combined effect of poorer scheduling and higher GRF use in the SIMD32
variant makes it more favorable to use SIMD16 only (see the last patch
of the MR for details and numbers).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
This causes the graph coloring allocator to use the optimistic
coloring codepath for all nodes whose total Q value exceeds the
threshold of 96 GRFs, in order to do a better job at minimizing the
register requirement of programs even when they are trivially
colorable. At the threshold of 96 GRFs the number of threads
available per EU starts decreasing as the number of register blocks
requested by the program increases, so decreasing the number of
registers can increase performance.
That showed up in some test cases as a performance inversion from
enabling VRT: the extension of the register set to 256 GRFs has the
side effect of making some non-trivially colorable programs trivially
colorable, which caused the register allocator to do a worse job at
ordering the (trivial) allocations, since the optimistic coloring path
was skipped, leading to increased register use and reduced
performance.
The following Traci test cases improve significantly as a result of
this change (4 iterations, 5% significance):
MetroExodus-trace-dx11-2160p-ultra: 1.90% ±0.85%
BaldursGate3-trace-dx11-1440p-ultra: 1.47% ±0.38%
Palworld-trace-dx11-1080p-med: 1.01% ±0.09%
TerminatorResistance-trace-dx11-2160p-ultra: 0.95% ±0.29%
Control-trace-dx11-1440p-high: 0.87% ±0.50%
Even though lowering the P value threshold is theoretically expected
to have a compile-time cost due to the increased use of the slower
optimistic path of the graph coloring allocator, this doesn't actually
show up in my numbers: my shader-db and fossil-db compile-time numbers
don't show any statistically significant change (13 iterations, 5%
significance).
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
This gives the driver the option to provide a custom threshold for the
PQ test performed by the graph coloring algorithm.
A threshold lower than the physical number of registers is helpful on
platforms where the number of registers used can impose a limit on the
thread parallelism of the program.  On such platforms, even though a
passing PQ test guarantees that the node can be pushed onto the stack
and neglected while coloring the remaining nodes, the order in which
this happens can have a dramatic effect on the register pressure of
the resulting shader and therefore also on the thread parallelism of
the program.
Setting a P value threshold lower than the real P value will cause
nodes with a Q value above the threshold to use the existing
optimistic coloring heuristic, which takes the trouble of ordering
nodes in the stack by Q value, in order to do a better job at
minimizing the total register requirement of the program.  Even though
this causes us to hit the optimistic codepaths for trivially colorable
nodes, the interference graph is still guaranteed to be trivially
colorable if it was trivially colorable without the override.
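In Chaitin-Briggs terms the idea is roughly the following (identifier
names are illustrative, not the util/register_allocate API):

  /* A node is only considered trivially colorable if the total size of
   * its live neighbors fits under the driver-provided threshold, which
   * may be lower than the physical register count. */
  static bool
  pq_test(unsigned q_total, unsigned p_threshold)
  {
     return q_total < p_threshold;
  }

Nodes failing this test take the optimistic-coloring path, which
orders them by Q value before pushing them onto the stack.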
The use of a threshold lower than the real P value comes at a
compile-time cost; the specific trade-off between compile time and run
time can be adjusted by the driver based on the number of registers
available to each thread without causing a hit to thread parallelism.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
This defines a new pre-RA scheduling mode similar to BRW_SCHEDULE_PRE
but more aggressive at optimizing for minimum latency rather than
minimum register usage. The main motivation is that on recent xe3
platforms we use a register allocation heuristic that packs variables
more tightly at the bottom of the register file instead of the
round-robin heuristic we used on previous platforms, since as a result
of VRT there is a parallelism penalty when a program uses more GRF
registers than necessary. Unfortunately the xe3 tight-packing
heuristic severely constrains the work of the post-RA scheduler due to
the false dependencies introduced during register allocation, so we
can do a better job by making the scheduler aware of instruction
latencies before the register allocator introduces any false
dependencies.
This can lead to higher register pressure, but only when the scheduler
decides it could save cycles by extending a live range.  It makes
sense to preserve the preexisting BRW_SCHEDULE_PRE as a separate mode,
since some workloads can still benefit from neglecting latencies
pre-RA due to the trade-off mentioned above between parallelism and
GRF use.  A future commit will introduce a more accurate estimate of
the expected relative performance of BRW_SCHEDULE_PRE
vs. BRW_SCHEDULE_PRE_LATENCY taking this trade-off into account.
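For illustration, the mode set ends up looking roughly like this (only
the two modes discussed here are taken from this series; the rest of
the enum is elided):

  enum brw_schedule_mode {
     BRW_SCHEDULE_PRE,           /* pre-RA, weighs register pressure */
     BRW_SCHEDULE_PRE_LATENCY,   /* pre-RA, prioritizes instruction latency */
     /* ... post-RA and other existing modes ... */
  };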
In theory this could also be helpful on earlier, pre-xe3 platforms,
but the benefit should be significantly smaller due to the different
RA heuristic, so it hasn't been tested extensively pre-xe3.
The following Traci tests are improved significantly by this change on
PTL (nearly all tests that run on my system are affected positively):
Ghostrunner2-trace-dx11-1440p-ultra: 7.12% ±0.36%
SpaceEngineers-trace-dx11-2160p-high: 5.77% ±0.43%
HogwartsLegacy-trace-dx12-1080p-ultra: 4.40% ±0.03%
Naraka-trace-dx11-1440p-highest: 3.06% ±0.43%
MetroExodus-trace-dx11-2160p-ultra: 2.26% ±0.60%
Fortnite-trace-dx11-2160p-epix: 2.12% ±0.53%
Nba2K23-trace-dx11-2160p-ultra: 1.98% ±0.30%
Control-trace-dx11-1440p-high: 1.93% ±0.36%
GodOfWar-trace-dx11-2160p-ultra: 1.62% ±0.47%
TotalWarPharaoh-trace-dx11-1440p-ultra: 1.55% ±0.18%
MountAndBlade2-trace-dx11-1440p-veryhigh: 1.51% ±0.37%
Destiny2-trace-dx11-1440p-highest: 1.44% ±0.34%
GtaV-trace-dx11-2160p-ultra: 1.26% ±0.27%
ShadowTombRaider-trace-dx11-2160p-ultra: 1.10% ±0.58%
Borderlands3-trace-dx11-2160p-ultra: 0.95% ±0.43%
TerminatorResistance-trace-dx11-2160p-ultra: 0.87% ±0.22%
BaldursGate3-trace-dx11-1440p-ultra: 0.84% ±0.28%
CitiesSkylines2-trace-dx11-1440p-high: 0.82% ±0.22%
PubG-trace-dx11-1440p-ultra: 0.72% ±0.37%
Palworld-trace-dx11-1080p-med: 0.71% ±0.26%
Superposition-trace-dx11-2160p-extreme: 0.69% ±0.19%
The compile-time cost of shader-db increases by a statistically
significant 1.85% after this commit (14 iterations, 5% significance);
the compile time of fossil-db doesn't change significantly in my
setup.
v2: Addressed interaction with 81594d0db1,
since the code that calculates deps, delays and exits is no longer
mode-independent after this change. Instead of reverting that
commit (which is non-trivial and would have a greater compile-time
hit) simply reconstruct the scheduler object during the transition
between BRW_SCHEDULE_PRE_LATENCY and any other PRE mode that
doesn't require instruction latencies.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
We are currently treating explicit flag writes and reads as a full
scheduler barrier, which is unnecessary since the tracking we already
do handles explicit flag access correctly, so there is no reason to
take a possibly large performance hit from add_barrier_deps().
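Roughly speaking the change amounts to the following, using
hypothetical helper names rather than the real scheduler code:

  /* before: explicit flag access forced a full barrier */
  if (inst_accesses_flag(inst) || inst_is_scheduling_barrier(inst))
     add_barrier_deps(n);

  /* after: flag access relies on the regular dependency tracking */
  if (inst_is_scheduling_barrier(inst))
     add_barrier_deps(n);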
Found by inspection while trying to understand the poor scheduling of
some fragment shaders. Improves performance by a small but
statistically significant amount (4 iterations, 5% significance) for
the following Traci tests in combination with a subsequent commit that
makes the pre-RA scheduler sensitive to instruction latencies:
SpaceEngineers-trace-dx11-2160p-high: 0.66% ±0.30%
MountAndBlade2-trace-dx11-1440p-veryhigh: 0.62% ±0.23%
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
We weren't handling the SHADER_OPCODE_SEND_GATHER instruction in the
instruction scheduler, and this was leading to reduced performance in
many programs, since SEND instructions have the longest latency and
tend to be among the most critical to schedule efficiently.  Handle
SENDG similarly to SEND, since the timings of both instructions are
mostly bound by the shared function, which doesn't care whether the
message was sent by SEND or SENDG.
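Sketch of the idea (estimate_send_latency() and estimate_alu_latency()
are stand-ins for the scheduler's existing latency model, not real
function names):

  switch (inst->opcode) {
  case SHADER_OPCODE_SEND:
  case SHADER_OPCODE_SEND_GATHER:   /* new: timed the same way as SEND */
     latency = estimate_send_latency(inst);
     break;
  default:
     latency = estimate_alu_latency(inst);
     break;
  }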
Improves performance significantly in the following Traci traces (4
iterations, 5% significance), most of which were regressions from
enabling SENDG:
MetroExodus-trace-dx11-2160p-ultra: 1.99% ±0.88%
HogwartsLegacy-trace-dx12-1080p-ultra: 1.33% ±0.20%
GtaV-trace-dx11-2160p-ultra: 1.12% ±0.19%
Borderlands3-trace-dx11-2160p-ultra: 1.00% ±0.58%
TerminatorResistance-trace-dx11-2160p-ultra: 0.98% ±0.27%
Control-trace-dx11-1440p-high: 0.91% ±0.36%
Naraka-trace-dx11-1440p-highest: 0.90% ±0.30%
Ghostrunner2-trace-dx11-1440p-ultra: 0.87% ±0.38%
Palworld-trace-dx11-1080p-med: 0.71% ±0.17%
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36618>
Not required since we've disabled maintenance8 support.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: d39e443ef8 ("anv: add infrastructure for common vk_pipeline")
Reviewed-by: Ivan Briano <ivan.briano@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/37242>
Refcounting uses atomics, which are a significant source of CPU
overhead in many applications.  By adding a method to inform the
driver that the frontend has released ownership of a buffer, all other
refcounting for the buffer can be eliminated.
See the MR for more details.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36296>
A barrier between two lds/vmem instructions needs to ensure that the
second starts after the first finishes, which means that we can't just
skip workgroup-scope vmem barriers if there is an lds instruction
later.
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36491>
For code like:
  if (cond) {
     val = load()
  }
  use(val)
The "use(val)" now has a similar cost to a use inside the IF.
Signed-off-by: Rhys Perry <pendingchaos02@gmail.com>
Reviewed-by: Georg Lehmann <dadschoorse@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36491>
* optimize not(compare(a,b)): nir_opt_algebraic does this only if the
comparison result is used only once, but on a vector arch we still get
an advantage from doing this, because it reduces dependencies.
* optimize b2f32(compare(a,b)): this is r600-specific (both cases are
illustrated in the sketch below)
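Roughly what the new peepholes do (illustrative, not the exact r600
code):

  /* inot(ige(a, b)) -> ilt(a, b): invert the comparison instead of
   * emitting a separate NOT, even when the original compare has other
   * uses and stays alive.
   *
   * b2f32(cmp(a, b)) -> a single SET* ALU op: r600's SETE/SETGT/SETGE
   * already produce 1.0f / 0.0f directly, so the conversion is free. */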
Signed-off-by: Gert Wollny <gert.wollny@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/37205>
This was incorrect (it also lowered int64 reductions/scans), and the
only user can just use the general callback to lower precisely what it
wants.
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/37164>
This was added with the goal of eventually replacing the per-pass
subgroup/ballot size options, but that won't work because some
backends don't have a fixed subgroup size across the compilation
process.
It was also mostly added to hack around Mesa state tracker behavior,
and we have a better solution there now.
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/37164>
We skip iterations with ifs.
These can be optimized away after the subgroup size is known.
Every driver should do that, because applications depend on it anyway.
Reviewed-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Reviewed-by: Emma Anholt <emma@anholt.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/37164>