/*
 * Copyright © 2012 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

#define XXH_INLINE_ALL
#include "util/xxhash.h"

#include "brw_fs.h"
#include "brw_builder.h"
#include "brw_cfg.h"

/** @file
 *
 * Support for SSA-based global Common Subexpression Elimination (CSE).
 */

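/* Describes how to rewrite sources that still refer to a def whose
 * computation was eliminated by CSE: the surviving instruction and its
 * block, plus the register number, type, and negation to use instead.
 */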
struct remap_entry {
   brw_inst *inst;
   bblock_t *block;
   enum brw_reg_type type;
   unsigned nr;
   bool negate;
   bool still_used;
};

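/**
 * True if the instruction produces a pure value with no side effects, so a
 * matching instruction's result can be reused in its place.
 */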
static bool
is_expression(const fs_visitor *v, const brw_inst *const inst)
{
   switch (inst->opcode) {
   case BRW_OPCODE_MOV:
   case BRW_OPCODE_SEL:
   case BRW_OPCODE_NOT:
   case BRW_OPCODE_AND:
   case BRW_OPCODE_OR:
   case BRW_OPCODE_XOR:
   case BRW_OPCODE_SHR:
   case BRW_OPCODE_SHL:
   case BRW_OPCODE_ASR:
   case BRW_OPCODE_ROR:
   case BRW_OPCODE_ROL:
   case BRW_OPCODE_CMP:
   case BRW_OPCODE_CMPN:
   case BRW_OPCODE_CSEL:
   case BRW_OPCODE_BFREV:
   case BRW_OPCODE_BFE:
   case BRW_OPCODE_BFI1:
   case BRW_OPCODE_BFI2:
   case BRW_OPCODE_ADD:
   case BRW_OPCODE_MUL:
   case SHADER_OPCODE_MULH:
   case BRW_OPCODE_AVG:
   case BRW_OPCODE_FRC:
   case BRW_OPCODE_LZD:
   case BRW_OPCODE_FBH:
   case BRW_OPCODE_FBL:
   case BRW_OPCODE_CBIT:
   case BRW_OPCODE_ADD3:
   case BRW_OPCODE_RNDU:
   case BRW_OPCODE_RNDD:
   case BRW_OPCODE_RNDE:
   case BRW_OPCODE_RNDZ:
   case BRW_OPCODE_LINE:
   case BRW_OPCODE_PLN:
   case BRW_OPCODE_MAD:
   case BRW_OPCODE_LRP:
   case FS_OPCODE_FB_READ_LOGICAL:
   case FS_OPCODE_UNIFORM_PULL_CONSTANT_LOAD:
   case FS_OPCODE_VARYING_PULL_CONSTANT_LOAD_LOGICAL:
   case SHADER_OPCODE_FIND_LIVE_CHANNEL:
   case SHADER_OPCODE_FIND_LAST_LIVE_CHANNEL:
   case SHADER_OPCODE_LOAD_LIVE_CHANNELS:
   case FS_OPCODE_LOAD_LIVE_CHANNELS:
   case SHADER_OPCODE_BROADCAST:
   case SHADER_OPCODE_SHUFFLE:
   case SHADER_OPCODE_QUAD_SWIZZLE:
   case SHADER_OPCODE_CLUSTER_BROADCAST:
   case SHADER_OPCODE_MOV_INDIRECT:
   case SHADER_OPCODE_TEX_LOGICAL:
   case SHADER_OPCODE_TXD_LOGICAL:
   case SHADER_OPCODE_TXF_LOGICAL:
   case SHADER_OPCODE_TXL_LOGICAL:
   case SHADER_OPCODE_TXS_LOGICAL:
   case FS_OPCODE_TXB_LOGICAL:
   case SHADER_OPCODE_TXF_CMS_W_LOGICAL:
   case SHADER_OPCODE_TXF_CMS_W_GFX12_LOGICAL:
   case SHADER_OPCODE_TXF_MCS_LOGICAL:
   case SHADER_OPCODE_LOD_LOGICAL:
   case SHADER_OPCODE_TG4_LOGICAL:
   case SHADER_OPCODE_TG4_BIAS_LOGICAL:
   case SHADER_OPCODE_TG4_EXPLICIT_LOD_LOGICAL:
   case SHADER_OPCODE_TG4_IMPLICIT_LOD_LOGICAL:
   case SHADER_OPCODE_TG4_OFFSET_LOGICAL:
   case SHADER_OPCODE_TG4_OFFSET_LOD_LOGICAL:
   case SHADER_OPCODE_TG4_OFFSET_BIAS_LOGICAL:
   case SHADER_OPCODE_SAMPLEINFO_LOGICAL:
   case SHADER_OPCODE_IMAGE_SIZE_LOGICAL:
   case SHADER_OPCODE_GET_BUFFER_SIZE:
   case FS_OPCODE_PACK:
   case FS_OPCODE_PACK_HALF_2x16_SPLIT:
   case SHADER_OPCODE_RCP:
   case SHADER_OPCODE_RSQ:
   case SHADER_OPCODE_SQRT:
   case SHADER_OPCODE_EXP2:
   case SHADER_OPCODE_LOG2:
   case SHADER_OPCODE_POW:
   case SHADER_OPCODE_INT_QUOTIENT:
   case SHADER_OPCODE_INT_REMAINDER:
   case SHADER_OPCODE_SIN:
   case SHADER_OPCODE_COS:
   case SHADER_OPCODE_LOAD_SUBGROUP_INVOCATION:
      return true;
   case SHADER_OPCODE_MEMORY_LOAD_LOGICAL:
      return inst->src[MEMORY_LOGICAL_MODE].ud == MEMORY_MODE_CONSTANT;
   case SHADER_OPCODE_LOAD_PAYLOAD:
      return !is_coalescing_payload(v->devinfo, v->alloc, inst);
   default:
      return inst->is_send_from_grf() && !inst->has_side_effects() &&
             !inst->is_volatile();
   }
}

/**
 * True if the instruction should only be CSE'd within its local block.
 */
bool
local_only(const brw_inst *inst)
{
   switch (inst->opcode) {
   case SHADER_OPCODE_FIND_LIVE_CHANNEL:
   case SHADER_OPCODE_FIND_LAST_LIVE_CHANNEL:
   case SHADER_OPCODE_LOAD_LIVE_CHANNELS:
   case FS_OPCODE_LOAD_LIVE_CHANNELS:
      /* These depend on the current channel enables, so the same opcode
       * in another block will likely return a different value.
       */
      return true;
   case BRW_OPCODE_MOV:
      /* Global CSE of MOVs is likely not worthwhile. It can increase
       * register pressure by extending the lifetime of simple constants.
       */
      return true;
   case SHADER_OPCODE_LOAD_PAYLOAD:
      /* This is basically a MOV */
      return inst->sources == 1;
   case BRW_OPCODE_CMP:
      /* Seems to increase spilling a lot without much benefit */
      return true;
   default:
      return false;
   }
}

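/**
 * Compare the sources of two instructions, taking commutativity into
 * account.  For float multiplies, *negate is set when the two products
 * differ only by an overall sign flip.
 */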
static bool
operands_match(const brw_inst *a, const brw_inst *b, bool *negate)
{
   brw_reg *xs = a->src;
   brw_reg *ys = b->src;

   if (a->opcode == BRW_OPCODE_MAD) {
      return xs[0].equals(ys[0]) &&
             ((xs[1].equals(ys[1]) && xs[2].equals(ys[2])) ||
              (xs[2].equals(ys[1]) && xs[1].equals(ys[2])));
   } else if (a->opcode == BRW_OPCODE_MUL && a->dst.type == BRW_TYPE_F) {
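      /* Float MUL: strip the negation modifiers (treating a negative
       * immediate as a negation), compare the sources commutatively, and
       * report through *negate whether the two products differ only by an
       * overall sign.  A sign-flipped value cannot be shared when either
       * instruction saturates, since saturation is not symmetric around
       * zero.
       */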
      bool xs0_negate = xs[0].negate;
      bool xs1_negate = xs[1].file == IMM ? xs[1].f < 0.0f
                                          : xs[1].negate;
      bool ys0_negate = ys[0].negate;
      bool ys1_negate = ys[1].file == IMM ? ys[1].f < 0.0f
                                          : ys[1].negate;
      float xs1_imm = xs[1].f;
      float ys1_imm = ys[1].f;

      xs[0].negate = false;
      xs[1].negate = false;
      ys[0].negate = false;
      ys[1].negate = false;
      xs[1].f = fabsf(xs[1].f);
      ys[1].f = fabsf(ys[1].f);

      bool ret = (xs[0].equals(ys[0]) && xs[1].equals(ys[1])) ||
                 (xs[1].equals(ys[0]) && xs[0].equals(ys[1]));

      xs[0].negate = xs0_negate;
      xs[1].negate = xs[1].file == IMM ? false : xs1_negate;
      ys[0].negate = ys0_negate;
      ys[1].negate = ys[1].file == IMM ? false : ys1_negate;
      xs[1].f = xs1_imm;
      ys[1].f = ys1_imm;

      *negate = (xs0_negate != xs1_negate) != (ys0_negate != ys1_negate);
      if (*negate && (a->saturate || b->saturate))
         return false;
      return ret;
   } else if (!a->is_commutative()) {
      bool match = true;
      for (int i = 0; i < a->sources; i++) {
         if (!xs[i].equals(ys[i])) {
            match = false;
            break;
         }
      }
      return match;
   } else if (a->sources == 3) {
      return (xs[0].equals(ys[0]) && xs[1].equals(ys[1]) && xs[2].equals(ys[2])) ||
             (xs[0].equals(ys[0]) && xs[1].equals(ys[2]) && xs[2].equals(ys[1])) ||
             (xs[0].equals(ys[1]) && xs[1].equals(ys[0]) && xs[2].equals(ys[2])) ||
             (xs[0].equals(ys[1]) && xs[1].equals(ys[2]) && xs[2].equals(ys[0])) ||
             (xs[0].equals(ys[2]) && xs[1].equals(ys[0]) && xs[2].equals(ys[1])) ||
             (xs[0].equals(ys[2]) && xs[1].equals(ys[1]) && xs[2].equals(ys[0]));
   } else {
      return (xs[0].equals(ys[0]) && xs[1].equals(ys[1])) ||
             (xs[1].equals(ys[0]) && xs[0].equals(ys[1]));
   }
}

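/**
 * True if two instructions are interchangeable for CSE purposes: same
 * opcode, execution controls, message descriptors, and matching sources.
 */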
static bool
instructions_match(brw_inst *a, brw_inst *b, bool *negate)
{
   return a->opcode == b->opcode &&
          a->exec_size == b->exec_size &&
          a->group == b->group &&
          a->predicate == b->predicate &&
          a->conditional_mod == b->conditional_mod &&
          a->dst.type == b->dst.type &&
          a->offset == b->offset &&
          a->mlen == b->mlen &&
          a->ex_mlen == b->ex_mlen &&
          a->sfid == b->sfid &&
          a->desc == b->desc &&
          a->ex_desc == b->ex_desc &&
          a->size_written == b->size_written &&
          a->check_tdr == b->check_tdr &&
          a->header_size == b->header_size &&
          a->target == b->target &&
          a->sources == b->sources &&
          a->bits == b->bits &&
          operands_match(a, b, negate);
}

/* -------------------------------------------------------------------- */

#define HASH(hash, data) XXH32(&(data), sizeof(data), hash)

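/* Mix the identifying bits of a brw_reg (register/value fields, offset,
 * and stride) into the running hash.
 */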
uint32_t
hash_reg(uint32_t hash, const brw_reg &r)
{
   struct {
      uint64_t u64;
      uint32_t u32;
      uint16_t u16a;
      uint16_t u16b;
   } data = {
      .u64 = r.u64, .u32 = r.bits, .u16a = r.offset, .u16b = r.stride
   };
   STATIC_ASSERT(sizeof(data) == 16); /* ensure there's no padding */
   hash = HASH(hash, data);
   return hash;
}

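/* Hash an instruction for the CSE set.  The destination is deliberately
 * skipped so that equivalent computations hash equally, and commutative
 * opcodes combine their source hashes by multiplication so operand order
 * does not affect the result.
 */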
static uint32_t
hash_inst(const void *v)
{
   const brw_inst *inst = static_cast<const brw_inst *>(v);
   uint32_t hash = 0;

   /* Skip dst - that would make nothing ever match */

   /* Skip ir and annotation - we don't care for equivalency purposes. */

   const uint8_t u8data[] = {
      inst->sources,
      inst->exec_size,
      inst->group,
      inst->mlen,
      inst->ex_mlen,
      inst->sfid,
      inst->header_size,
      inst->target,

      inst->conditional_mod,
      inst->predicate,
   };
   const uint32_t u32data[] = {
      inst->desc,
      inst->ex_desc,
      inst->offset,
      inst->size_written,
      inst->opcode,
      inst->bits,
   };

   hash = HASH(hash, u8data);
   hash = HASH(hash, u32data);

   /* Skip hashing sched - we shouldn't be CSE'ing after that SWSB */

   if (inst->opcode == BRW_OPCODE_MAD) {
      /* Commutatively combine the hashes for the multiplicands */
      hash = hash_reg(hash, inst->src[0]);
      uint32_t hash1 = hash_reg(hash, inst->src[1]);
      uint32_t hash2 = hash_reg(hash, inst->src[2]);
      hash = hash1 * hash2;
   } else if (inst->opcode == BRW_OPCODE_MUL &&
              inst->dst.type == BRW_TYPE_F) {
      /* Canonicalize negations on either source (or both) and commutatively
       * combine the hashes for both sources.
       */
      brw_reg src[2] = { inst->src[0], inst->src[1] };
      uint32_t src_hash[2];

      for (int i = 0; i < 2; i++) {
         src[i].negate = false;
         if (src[i].file == IMM)
            src[i].f = fabs(src[i].f);

         src_hash[i] = hash_reg(hash, src[i]);
      }

      hash = src_hash[0] * src_hash[1];
   } else if (inst->is_commutative()) {
      /* Commutatively combine the sources */
      uint32_t hash0 = hash_reg(hash, inst->src[0]);
      uint32_t hash1 = hash_reg(hash, inst->src[1]);
      uint32_t hash2 = inst->sources > 2 ? hash_reg(hash, inst->src[2]) : 1;
      hash = hash0 * hash1 * hash2;
   } else {
      /* Just hash all the sources */
      for (int i = 0; i < inst->sources; i++)
         hash = hash_reg(hash, inst->src[i]);
   }

   return hash;
}

/* -------------------------------------------------------------------- */

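/* Equality callback for the CSE hash set; the negate result from
 * instructions_match() is not needed for set membership and is discarded.
 */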
static bool
cmp_func(const void *data1, const void *data2)
{
   bool negate;
   return instructions_match((brw_inst *) data1, (brw_inst *) data2, &negate);
}

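/* Rewrite any source of inst that refers to a def eliminated by CSE so it
 * reads the surviving value recorded in remap_table instead.
 */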
static bool
|
2024-12-06 21:20:58 -08:00
|
|
|
remap_sources(fs_visitor &s, const brw_def_analysis &defs,
|
2024-12-07 00:23:07 -08:00
|
|
|
brw_inst *inst, struct remap_entry *remap_table)
|
intel/brw: Write a new global CSE pass that works on defs
This has a number of advantages compared to the pass I wrote years ago:
- It can easily perform either Global CSE or block-local CSE, without
needing to roll any dataflow analysis, thanks to SSA def analysis.
This global CSE is able to detect and coalesce memory loads across
blocks. Although it may increase spilling a little, the reduction
in memory loads seems to more than compensate.
- Because SSA guarantees that values are never written more than once,
the new CSE pass can directly reuse an existing value. The old pass
emitted copies at the point where it discovered a value because it
had no idea whether it'd be mutated later. This led it to generate
a ton of trash for copy propagation to clean up later, and also a
nasty fragility where CSE, register coalescing, and copy propagation
could all fight one another by generating and cleaning up copies,
leading to infinite optimization loops unless we were really careful.
Generating less trash improves our CPU efficiency.
- It uses hash tables like nir_instr_set and nir_opt_cse, instead of
linearly walking lists and comparing each element. This is much more
CPU efficient.
- It doesn't use liveness analysis, which is one of the most expensive
analysis passes that we have. Def analysis is cheaper.
In addition to CSE'ing SSA values, we continue to handle flag writes,
as this is a huge source of CSE'able values. These remain block local.
However, we can simply track the last flag write, rather than creating
entire sets of instruction entries like the old pass. Much simpler.
The only real downside to this pass is that, because the backend is
currently only partially SSA, it has limited visibility and isn't able
to see all values. However, the results appear to be good enough that
the new pass can effectively replace the old pass in almost all cases.
Reviewed-by: Caio Oliveira <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/28666>
2024-02-29 01:55:35 -08:00
|
|
|
{
|
brw: fix CSE with negation
The pass is currently turning this :
mul(16) %17:F, %1:F, 0.5f
mul(16) %19:F, %1:F, -0.5f
(+f0.0) sel(16) %27:UD, %19:UD, %17:UD
into this :
{ 12} mul(16) %17:F, %1:F, 0.5f
{ 14} (+f0.0) sel(16) %27:UD, -%17:F, %17:UD
The type change in the SEL instruction incurs a type conversion that
produces invalid values.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Fixes: 234c45c929 ("intel/brw: Write a new global CSE pass that works on defs")
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/12477
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33070>
2025-01-16 17:50:15 +02:00
|
|
|
bool progress = false;
   for (int i = 0; i < inst->sources; i++) {
      if (inst->src[i].file == VGRF &&
          inst->src[i].nr < defs.count() &&
          remap_table[inst->src[i].nr].inst != NULL) {
         const unsigned old_nr = inst->src[i].nr;
         const unsigned new_nr = remap_table[old_nr].nr;
         const bool need_negate = remap_table[old_nr].negate;

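         /* Folding the negation into this source is only safe when the
          * types match and the instruction accepts source modifiers;
          * otherwise the reinterpretation would turn the negate into a
          * bogus type conversion (see the SEL example in the commit
          * message above).  Keep the original def alive instead.
          */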
         if (need_negate &&
             (remap_table[old_nr].type != inst->src[i].type ||
              !inst->can_do_source_mods(s.devinfo))) {
            remap_table[old_nr].still_used = true;
            continue;
         }

         inst->src[i].nr = new_nr;

         if (!inst->src[i].abs)
            inst->src[i].negate ^= need_negate;

         progress = true;
      }
   }

   return progress;
}
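
/* Dominance-based CSE over SSA defs.  Instructions producing defs are
 * hashed into a set as we walk the CFG in program order; when a later
 * instruction matches an earlier one whose block dominates it (or the
 * same block, for opcodes restricted to local CSE), the new def is
 * remapped onto the existing one and removed.  Redundant flag writes are
 * handled separately and stay block-local, using only the most recent
 * flag-writing instruction.
 */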
bool
brw_opt_cse_defs(fs_visitor &s)
{
   const intel_device_info *devinfo = s.devinfo;
   const brw_idom_tree &idom = s.idom_analysis.require();
   const brw_def_analysis &defs = s.def_analysis.require();
   bool progress = false;
   bool need_remaps = false;

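   /* One remap_entry per def number.  When a def is CSE'd, its entry
    * records the surviving def (nr/type/negate) so remap_sources() can
    * rewrite later uses, plus the redundant instruction and its block so
    * it can be deleted at the end unless a use still needs it.
    */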
   struct remap_entry *remap_table = new remap_entry[defs.count()];
   memset(remap_table, 0, defs.count() * sizeof(struct remap_entry));
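
   /* Set of candidate instructions seen so far, keyed by hash_inst() and
    * compared with cmp_func.
    */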
   struct set *set = _mesa_set_create(NULL, NULL, cmp_func);

   foreach_block(block, s.cfg) {
      brw_inst *last_flag_write = NULL;
      brw_inst *last = NULL;

      foreach_inst_in_block_safe(brw_inst, inst, block) {
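         /* First rewrite any sources that refer to defs CSE'd away on
          * earlier instructions.
          */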
         if (need_remaps)
            progress |= remap_sources(s, defs, inst, remap_table);

         /* Updating last_flag_write should be at the bottom of the loop,
          * but doing it this way lets us use "continue" more easily.
          */
         if (last && last->flags_written(devinfo))
            last_flag_write = last;
         last = inst;

         if (inst->dst.is_null()) {
            bool ignored;
            if (last_flag_write && !inst->writes_accumulator &&
                instructions_match(last_flag_write, inst, &ignored)) {
               /* This instruction has no destination but has a flag write
                * which is redundant with the previous flag write in our
                * basic block. So we can simply remove it.
                */
               inst->remove(block, true);
               last = NULL;
               progress = true;
            }
         } else if (is_expression(&s, inst) && defs.get(inst->dst)) {
            assert(!inst->writes_accumulator);
            assert(!inst->reads_accumulator_implicitly());

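            /* Predicated instructions may only match when they observe the
             * same flag value, so mix the last flag-writing instruction
             * (or the block itself, if the flags come from a predecessor)
             * into the hash.
             */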
            uint32_t hash = hash_inst(inst);
            if (inst->flags_read(devinfo)) {
               hash = last_flag_write ? HASH(hash, last_flag_write)
                                      : HASH(hash, block);
            }

            struct set_entry *e =
               _mesa_set_search_or_add_pre_hashed(set, hash, inst, NULL);
            if (!e) goto out; /* out of memory error */

            brw_inst *match = (brw_inst *) e->key;

            /* If there was no match, the instruction was just added to the
             * set; move on.
             */
            if (match == inst)
               continue;

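            /* For a cross-block match, the existing def must dominate this
             * use, and the opcode must not be restricted to block-local
             * CSE.
             */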
            bblock_t *def_block = defs.get_block(match->dst);
            if (block != def_block && (local_only(inst) ||
                                       !idom.dominates(def_block, block))) {
               /* If `match` doesn't dominate `inst` then remove it from
                * the set and add `inst` instead so future lookups see that.
                */
               e->key = inst;
               continue;
            }

            /* We can replace inst with match or negate(match). */
            bool negate = false;
            if (inst->opcode == BRW_OPCODE_MUL &&
                inst->dst.type == BRW_TYPE_F) {
               /* Determine whether inst is actually negate(match) */
               bool ops_must_match = operands_match(inst, match, &negate);
               assert(ops_must_match);
            }

brw/cse: Don't eliminate instructions that write flags

With other changes in my tree, I observed that this code from
dEQP-VK.subgroups.vote.compute.subgroupallequal_float had the second
cmp.z removed:

   undef(8)        %69:UD
   cmp.z.f0.0(8)   %69:F, %37:F, %57+0.0<0>:F
   mov(1)          v58+0.0:D, 0d NoMask group0
   (+f0.0) mov(1)  v58+0.0:D, -1d NoMask group0
   cmp.nz.f0.0(8)  null:D, v58+0.0<0>:D, 0d
   ...
   undef(8)        %72:UD
   cmp.z.f0.0(8)   %72:F, %37:F, %57+0.0<0>:F
   mov(1)          v63+0.0:D, 0d NoMask group0
   (+f0.0) mov(1)  v63+0.0:D, -1d NoMask group0

This was also fixed by running dead-code elimination before CSE. That
seems more like avoiding the problem than fixing it, though.

I believe this affects shader-db results because leaving the second
CMP in the shader can give more opportunities for cmod propagation.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Fixes: 234c45c929e ("intel/brw: Write a new global CSE pass that works on defs")

shader-db:
All Intel platforms had similar results. (Lunar Lake shown)
total cycles in shared programs: 922097690 -> 922260862 (0.02%)
cycles in affected programs: 3178926 -> 3342098 (5.13%)
helped: 130
HURT: 88
helped stats (abs) min: 2 max: 2194 x̄: 296.71 x̃: 16
helped stats (rel) min: <.01% max: 16.56% x̄: 1.86% x̃: 0.18%
HURT stats (abs) min: 4 max: 11992 x̄: 2292.55 x̃: 47
HURT stats (rel) min: 0.04% max: 57.32% x̄: 11.82% x̃: 0.61%
95% mean confidence interval for cycles value: 320.36 1176.63
95% mean confidence interval for cycles %-change: 1.59% 5.73%
Cycles are HURT.
LOST: 2
GAINED: 1

fossil-db:
Lunar Lake, Meteor Lake, Tiger Lake had similar results. (Lunar Lake shown)
Totals:
Instrs: 142022960 -> 142022928 (-0.00%); split: -0.00%, +0.00%
Cycle count: 21995242782 -> 21995384040 (+0.00%); split: -0.00%, +0.00%
Max live registers: 48013385 -> 48013343 (-0.00%)
Totals from 507 (0.09% of 551441) affected shaders:
Instrs: 886191 -> 886159 (-0.00%); split: -0.01%, +0.01%
Cycle count: 69302492 -> 69443750 (+0.20%); split: -0.66%, +0.86%
Max live registers: 94413 -> 94371 (-0.04%)

DG2
Totals:
Instrs: 152856370 -> 152856093 (-0.00%); split: -0.00%, +0.00%
Cycle count: 17237159885 -> 17236804052 (-0.00%); split: -0.00%, +0.00%
Fill count: 150673 -> 150631 (-0.03%)
Max live registers: 31871520 -> 31871476 (-0.00%)
Totals from 506 (0.08% of 633197) affected shaders:
Instrs: 831795 -> 831518 (-0.03%); split: -0.04%, +0.01%
Cycle count: 55578509 -> 55222676 (-0.64%); split: -1.38%, +0.74%
Fill count: 2779 -> 2737 (-1.51%)
Max live registers: 51383 -> 51339 (-0.09%)

Ice Lake and Skylake had similar results. (Ice Lake shown)
Totals:
Instrs: 152017826 -> 152017793 (-0.00%); split: -0.00%, +0.00%
Cycle count: 15180773451 -> 15180761166 (-0.00%); split: -0.00%, +0.00%
Fill count: 106610 -> 106614 (+0.00%)
Max live registers: 32195006 -> 32194966 (-0.00%)
Totals from 411 (0.06% of 637268) affected shaders:
Instrs: 705935 -> 705902 (-0.00%); split: -0.01%, +0.01%
Cycle count: 47830019 -> 47817734 (-0.03%); split: -0.05%, +0.02%
Fill count: 2865 -> 2869 (+0.14%)
Max live registers: 42883 -> 42843 (-0.09%)

Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32041>

            /* Some later instruction could depend on the flags written by
             * this instruction.  It can only be removed if the previous
             * instruction that writes the flags is identical.
             */
            if (inst->flags_written(devinfo)) {
               bool ignored;

               if (last_flag_write == NULL ||
                   !instructions_match(last_flag_write, inst, &ignored)) {
                  continue;
               }
            }

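            /* Record the redundant def.  Later uses are rewritten to
             * match->dst by remap_sources(), and the def itself is deleted
             * after the walk unless a use marks it still_used.
             */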
            need_remaps = true;
            remap_table[inst->dst.nr].inst = inst;
            remap_table[inst->dst.nr].block = block;
            remap_table[inst->dst.nr].type = match->dst.type;
            remap_table[inst->dst.nr].nr = match->dst.nr;
            remap_table[inst->dst.nr].negate = negate;
            remap_table[inst->dst.nr].still_used = false;
         }
      }
   }

   /* Remove the def instructions that are now unused. */
   for (unsigned i = 0; i < defs.count(); i++) {
      if (!remap_table[i].inst)
         continue;

      if (!remap_table[i].still_used) {
         remap_table[i].inst->remove(remap_table[i].block, true);
         progress = true;
      }
   }

out:
   delete [] remap_table;
   _mesa_set_destroy(set, NULL);

   if (progress) {
      s.cfg->adjust_block_ips();
      s.invalidate_analysis(BRW_DEPENDENCY_INSTRUCTION_DATA_FLOW |
                            BRW_DEPENDENCY_INSTRUCTION_DETAIL);
   }

   return progress;
}

#undef HASH