This isn't a great solution for bit sizes, but we don't have a
particularly convenient way to get a bit size from the system value
enum, and this keeps the lowering pass from changing it.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Instead of doing our own constant folding, we just emit instructions and
let constant folding happen. This is substantially simpler and lets us
use the nir_imm_bool helper instead of dealing with the const_values
ourselves.
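For illustration, a minimal sketch of the pattern (names are
illustrative, not the exact code in the pass):

    /* Just emit the comparison; if 'src' turns out to be constant, a
     * later nir_opt_constant_folding run will fold the whole thing. */
    nir_ssa_def *cond = nir_ine(&b, src, nir_imm_bool(&b, false));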
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This requires that we rework the interface a bit to use nir_builder, but
that's a nice little modernization anyway.
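For reference, the nir_builder setup this implies looks roughly like
(sketch):

    nir_builder b;
    nir_builder_init(&b, impl);
    b.cursor = nir_before_instr(&instr->instr);
    /* ...emit the lowered code with the nir_* builder helpers... */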
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Missed one while converting to the nir_src_as_* helpers.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Use nir_src_comp_as_uint() to read the proper second component, as
nir_src_as_uint() returns the first one.
v2: Use nir_src_comp_as_uint() [Jason]
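For clarity, the difference (the nir_src here is illustrative):

    /* nir_src_as_uint() only ever reads component 0: */
    uint64_t x = nir_src_as_uint(src);
    /* nir_src_comp_as_uint() reads the component you ask for, here
     * the second one: */
    uint64_t y = nir_src_comp_as_uint(src, 1);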
Fixes: 16870de8a0 ("nir: Use nir_src_is_const and nir_src_as_* in core code")
Signed-off-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108532
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The linking opts shouldn't try removing or compacting XFB varyings
in the consumer. To avoid this we copy the always_active_io flag
from the producer.
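Conceptually the copy amounts to this (sketch, assuming the
producer/consumer variables have already been matched up):

    /* Keep the linking opts from touching the consumer-side varying. */
    if (producer_var->data.always_active_io)
       consumer_var->data.always_active_io = true;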
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The canon_bit_class and canon_var_class variables got switched.
Fixes: 932c650e0b "nir/algebraic: Loosen a restriction on variables"
Reported-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Previously, we would fail if a variable had an assigned but unknown bit
size X and we tried to assign it an actual bit size. However, this is
ok because, at the time we do the search, the variable does have an
actual bit size and it will match X because of the NIR rules.
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
We rename some local variables in validate() to be more readable and
plumb the var through to get/set_var_bit_class instead of the var index.
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
This not only makes them safe for more bit sizes, but it also fixes a bug
in is_zero_to_one where it would return true for constant NaN.
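A sketch of the fix (illustrative helper; the real code checks each
used component of the constant source):

    #include <math.h>

    static inline bool
    is_zero_to_one_value(double v)
    {
       /* A check written as !(v < 0.0 || v > 1.0) returns true for
        * NaN, since NaN compares false against everything; reject it
        * explicitly first. */
       if (isnan(v))
          return false;
       return v >= 0.0 && v <= 1.0;
    }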
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Extend the pass to propagate the copies information along the control
flow graph. It performs two walks: the first collects the vars that
were written inside each node; the second applies the copy propagation
using a list of copies previously available. At each node the list is
invalidated according to the results from the first walk.
This approach is simpler than a full data-flow analysis, but covers
various cases. If derefs are used for operating on more memory
resources (e.g. SSBOs), the difference from a regular pass is expected
to be more visible -- as the SSA copy propagation pass won't apply to
those.
A full data-flow analysis would handle more scenarios: conditional
breaks in the control flow and merge equivalent effects from multiple
branches (e.g. using a phi node to merge the source for writes to the
same deref). However, as previous commentary in the code stated, its
complexity would 'rapidly get out of hand'. The current patch is a good
intermediate step towards more complex analysis.
The 'copies' linked list was modified to use util_dynarray to make it
more convenient to clone it (to handle ifs/loops).
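The resulting structure, as a rough skeleton (hypothetical names;
struct state / struct copies stand in for the pass's bookkeeping):

    static void
    copy_prop_vars_cf_node(struct state *state, struct copies *copies,
                           nir_cf_node *cf_node)
    {
       switch (cf_node->type) {
       case nir_cf_node_block:
          /* Apply known copies to each load/copy, record new ones
           * from each store/copy. */
          break;
       case nir_cf_node_if:
          /* Recurse into each branch with a clone of the list; after
           * the if, invalidate entries written in either branch
           * (known from the first walk). */
          break;
       case nir_cf_node_loop:
          /* Invalidate everything the loop writes before walking the
           * body: a previous iteration may have clobbered it. */
          break;
       default:
          unreachable("unknown CF node type");
       }
    }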
Annotated shader-db results for Skylake:
total instructions in shared programs: 15105796 -> 15105451 (<.01%)
instructions in affected programs: 152293 -> 151948 (-0.23%)
helped: 96
HURT: 17
All the HURTs and many of the HELPs are one instruction. Looking
at the pass-by-pass outputs, the copy prop kicks in removing a
bunch of loads correctly, which ends up altering which other
optimizations kick in. In those cases the copies would be
propagated after lowering to SSA anyway.
In a few HELPs we are actually doing more than was possible
previously, e.g. consolidating load_uniforms from different
blocks. Most of those are from shaders/dolphin/ubershaders/.
total cycles in shared programs: 566048861 -> 565954876 (-0.02%)
cycles in affected programs: 151461830 -> 151367845 (-0.06%)
helped: 2933
HURT: 2950
A lot of noise on both sides.
total loops in shared programs: 4603 -> 4603 (0.00%)
loops in affected programs: 0 -> 0
helped: 0
HURT: 0
total spills in shared programs: 11085 -> 11073 (-0.11%)
spills in affected programs: 23 -> 11 (-52.17%)
helped: 1
HURT: 0
The shaders/dolphin/ubershaders/12.shader_test was able to
pull a couple of loads from inside if statements and reuse
them.
total fills in shared programs: 23143 -> 23089 (-0.23%)
fills in affected programs: 2718 -> 2664 (-1.99%)
helped: 27
HURT: 0
All from shaders/dolphin/ubershaders/.
LOST: 0
GAINED: 0
The other generations follow the same overall shape. The spills and
fills HURTs are all from the same game.
shader-db results for Broadwell:
total instructions in shared programs: 15402037 -> 15401841 (<.01%)
instructions in affected programs: 144386 -> 144190 (-0.14%)
helped: 86
HURT: 9
total cycles in shared programs: 600912755 -> 600902486 (<.01%)
cycles in affected programs: 185662820 -> 185652551 (<.01%)
helped: 2598
HURT: 3053
total loops in shared programs: 4579 -> 4579 (0.00%)
loops in affected programs: 0 -> 0
helped: 0
HURT: 0
total spills in shared programs: 80929 -> 80924 (<.01%)
spills in affected programs: 720 -> 715 (-0.69%)
helped: 1
HURT: 5
total fills in shared programs: 93057 -> 93013 (-0.05%)
fills in affected programs: 3398 -> 3354 (-1.29%)
helped: 27
HURT: 5
LOST: 0
GAINED: 2
shader-db results for Haswell:
total instructions in shared programs: 9231975 -> 9230357 (-0.02%)
instructions in affected programs: 44992 -> 43374 (-3.60%)
helped: 27
HURT: 69
total cycles in shared programs: 87760587 -> 87727502 (-0.04%)
cycles in affected programs: 7720673 -> 7687588 (-0.43%)
helped: 1609
HURT: 1416
total loops in shared programs: 1830 -> 1830 (0.00%)
loops in affected programs: 0 -> 0
helped: 0
HURT: 0
total spills in shared programs: 1988 -> 1692 (-14.89%)
spills in affected programs: 296 -> 0
helped: 1
HURT: 0
total fills in shared programs: 2103 -> 1668 (-20.68%)
fills in affected programs: 438 -> 3 (-99.32%)
helped: 4
HURT: 0
LOST: 0
GAINED: 1
v2: Remove the DISABLED prefix from tests we now pass.
v3: Add comments about missing write_mask handling. (Caio)
Add unreachable when switching on cf_node type. (Jason)
Properly merge the component information in written map
instead of replacing. (Jason)
Explain how removal from written arrays works. (Jason)
Use mode directly from deref instead of getting the var. (Jason)
v4: Register the local written mode for calls. (Jason)
Prefer cf_node instead of node. (Jason)
Clarify that removal inside an iteration only works in backward
iterations. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Calls are not used yet (functions are inlined), but since new code is
already taking them into account, do it here too. The convention here
and in other places is that neither writable memory nor global
variables are assumed to remain unchanged across a call.
Also, explicitly state the modes affected (instead of using the
reverse logic) in one of the apply_for_barrier_modes calls.
Suggested by Jason.
v2: Conservatively consider local vars to be used by a call; SPIR-V
has such cases. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Also add tests for the removal of redundant loads, which we currently
handle as part of the copy propagation.
Note that some tests involve multiple blocks and are currently DISABLED
because they (expectedly) fail.
v2: Add missing DISABLED prefix to "multi block" tests. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Instead of doing this as part of the existing copy_prop_vars pass.
Separation makes it easier to expand the scope of both passes to be
more than per-block. For copy propagation, the information about valid
copies comes from previous instructions, while the dead write removal
depends on information from later instructions ("has any instruction
used this deref before overwriting it?").
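The dead-write test, sketched (writes_fully_overlap() and the 'prev'
bookkeeping struct are hypothetical):

    /* 'prev' is a store whose components have not been read since it
     * executed. If the new store overwrites all of them, 'prev' was
     * dead. */
    if (writes_fully_overlap(prev->deref, prev->mask,
                             new_deref, new_mask))
       nir_instr_remove(&prev->store->instr);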
Also change the tests to use this pass (instead of copy prop vars).
Note that the disabled tests continue to fail, since the standalone
pass is still per-block.
v2: Remove entries from dynarray instead of marking items as
deleted. Use foreach_reverse. (Caio)
(all from Jason)
Do not cache nir_deref_path. Not worthwhile for this patch.
Clear unused writes when hitting a call instruction.
Clean up enumeration of modes for barriers.
Move metadata calls to the inner function.
v3: For copies, use the vector length to calculate the mask.
(all from Jason)
Use nir_component_mask_t when applicable.
Rename functions for clarity.
Conservatively consider local vars to be used by a call (SPIR-V has
such cases).
Comment and assert the assumption that stores and copies are
always to a deref that ends with a vector or scalar.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Note that at the moment the pass called is nir_opt_copy_prop_vars,
because dead write elimination is implemented there.
Also added tests that involve identifying dead writes in multiple
blocks (e.g. the overwrite happens in another block). Those currently
fail as expected, so are marked to be skipped.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Add basic helpers for doing tests on the vars-related optimization
passes. The main goal is to lower the barrier to creating tests during
development and debugging of the passes. Full coverage is not a
requirement.
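A test written against these helpers might look roughly like this
(a sketch; the fixture name and the create_var() helper wrapping
nir_variable_create() are illustrative):

    TEST_F(nir_vars_test, copy_is_propagated)
    {
       nir_variable *in  = create_var(nir_var_shader_in,  "in");
       nir_variable *out = create_var(nir_var_shader_out, "out");
       nir_variable *tmp = create_var(nir_var_global,     "tmp");

       /* tmp = in; out = tmp; -- the load of tmp should become in. */
       nir_copy_var(b, tmp, in);
       nir_store_var(b, out, nir_load_var(b, tmp), 0x1);

       EXPECT_TRUE(nir_opt_copy_prop_vars(b->shader));
    }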
v2: Make find_next_intrinsic() skip blocks before 'after'. (Jason)
Move nir_imm_ivec2() to nir_builder.h. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For gallium drivers where you want to do some linking at variant compile
time, you don't have the other producer/consumer shader on hand to modify.
By exposing the inner function, the driver can have the used varyings in
the compiled shader cache key and still do linking.
This is also useful for V3D, where the binning shader wants to only output
position and TF varyings. We've been removing those after nir_lower_io,
but this will be less driver-specific code and let more of the shader get
DCEed early in NIR.
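A hedged sketch of the driver-side call (assuming the exposed helper
is nir_remove_unused_io_vars() and that the variant key carries the
other stage's usage masks; the key fields here are hypothetical):

    /* Trim outputs the consumer never reads, using masks stored in
     * the shader cache key. */
    uint64_t used         = key->used_varyings;
    uint64_t used_patches = key->used_patch_varyings;
    nir_remove_unused_io_vars(s, &s->outputs, &used, &used_patches);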
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
This is needed for nir_gather_info to actually count the new textures,
since it operates solely on variables.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Dave Airlie <airlied@redhat.com>
v2: - change how the access qualifiers are accumulated
v3: - duplicate members in struct_member_decoration_cb()
- handle access qualifiers on variables
- remove access qualifiers handling in _vtn_variable_load_store()
- fix setting access qualifiers on type->array_element
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The b2f and b2i conversions always produce zero or one, which are both
representable in every type and size. Since b2i and b2f support all bit
sizes, we can just get rid of the conversion opcode.
total instructions in shared programs: 15089335 -> 15084368 (-0.03%)
instructions in affected programs: 212564 -> 207597 (-2.34%)
helped: 896
HURT: 0
total cycles in shared programs: 369831123 -> 369826267 (<.01%)
cycles in affected programs: 2008647 -> 2003791 (-0.24%)
helped: 693
HURT: 216
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This allows us to not support fsign.sat in the Intel compiler backend,
which will simplify some later changes.
No shader-db changes on any Intel platform.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Thomas Helland <thomashelland90@gmail.com>
The ssa_for_alu_src helper will correctly handle swizzles and other
source modifiers for you. The expansions for unpack_half_2x16,
pack_uvec2_to_uint, and pack_uvec4_to_uint were all broken with regard
to swizzles. The brokenness of unpack_half_2x16 was causing rendering
errors in Rise of the Tomb Raider on Intel ever since c11833ab24,
which added an extra copy propagation to the optimization pipeline and
caused us to start seeing swizzles where we hadn't seen any before.
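The safe pattern, sketched ('alu' is the ALU instruction being
lowered):

    /* nir_ssa_for_alu_src() resolves the source's swizzle (and
     * abs/neg modifiers) instead of naively grabbing
     * instr->src[i].src.ssa, which silently drops them. */
    nir_ssa_def *src = nir_ssa_for_alu_src(&b, alu, 0);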
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107926
Fixes: 9ce901058f "nir: Add lowering of nir_op_unpack_half_2x16."
Fixes: 9b8786eba9 "nir: Add lowering support for packing opcodes."
Tested-by: Alex Smith <asmith@feralinteractive.com>
Tested-by: Józef Kucia <joseph.kucia@gmail.com>
Reviewed-by: Matt Turner <mattst88@gmail.com>
We already call nir_rematerialize_derefs_in_use_blocks_impl prior to
calling nir_lower_ssa_defs_to_regs_block, so the assertion that all
deref uses are in the block should hold. This fixes the following CTS
test when dEQP is run with SPIR-V optimization recipe 1:
dEQP-VK.glsl.struct.local.loop_nested_struct_array_vertex
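The call ordering being relied on, roughly (sketch):

    /* Every deref is re-materialized into each block that uses it
     * first... */
    nir_rematerialize_derefs_in_use_blocks_impl(impl);
    /* ...so by the time each block is lowered, the assert that a
     * deref's uses live in the same block as the deref holds. */
    nir_lower_ssa_defs_to_regs_block(block);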
Fixes: 606eb56ab9 "intel/nir: Only lower load/store derefs"
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
If the block in which the jump is inserted is the predecessor of a phi,
then we need to remove the corresponding phi sources; otherwise the phi
may end up with things improperly connected. This fixes the following
CTS test when dEQP is run with SPIR-V optimization recipe 1:
dEQP-VK.glsl.functions.control_flow.return_in_nested_loop_vertex
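The fix amounts to something like this when the jump is inserted
(sketch; the function name is illustrative):

    static void
    remove_phi_src(nir_block *block, nir_block *pred)
    {
       nir_foreach_instr(instr, block) {
          if (instr->type != nir_instr_type_phi)
             break;   /* phis always come first in a block */

          nir_phi_instr *phi = nir_instr_as_phi(instr);
          nir_foreach_phi_src_safe(src, phi) {
             if (src->pred == pred) {
                /* 'pred' now ends in a jump and is no longer a
                 * predecessor, so drop the source referencing it. */
                list_del(&src->src.use_link);
                exec_node_remove(&src->node);
             }
          }
       }
    }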
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
CC: Jason Ekstrand <jason@jlekstrand.net>
Fixes: 82799a5d1b8 ("nir: Add a small pass to rematerialize derefs per-block")
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
The lcssa and phis_to_regs passes are used by various NIR optimizations
that modify the CFG. Adding a couple of asserts will help ensure that
we don't accidentally put derefs in phis as part of an optimization
pass.
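For instance, a sketch of the kind of assert added:

    /* A deref flowing through a phi would defeat the per-block deref
     * analysis these passes rely on. */
    assert(def->parent_instr->type != nir_instr_type_deref);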
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
When we're about to re-arrange a bunch of blocks, it's a good idea to
make sure that we don't have deref uses crossing block boundaries.
Otherwise we may end up with a deref going through a phi and that would
be bad.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Cc: "18.2" <mesa-stable@lists.freedesktop.org>
This pass re-materializes deref instructions on a per-block basis to
ensure that every use of a deref occurs in the same block as the
instruction which uses it.
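Skeleton of the pass (sketch):

    nir_foreach_block(block, impl) {
       nir_foreach_instr_safe(instr, block) {
          /* For every deref-typed source whose deref instruction
           * lives in another block, clone the deref chain right
           * before this instruction and rewrite the source to point
           * at the clone. */
       }
    }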
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Cc: "18.2" <mesa-stable@lists.freedesktop.org>
In GLSL IR we cheat with switch statements and simply convert them
into loops with a single iteration. This allowed us to make use of
the existing jump instruction handling provided by the loop handling
code; it also allows dead code to be cleaned up once we have wrapped
the code in a loop.
However, using loops in this way created previously unrollable loops,
which limits further optimisations. Here we provide a way to unroll
loops that end in a break and have multiple other exits.
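For illustration, the source-level shape of the trick (plain-C sketch,
not the IR; x, a, and b are assumed declared):

    /* A switch ... */
    switch (x) {
    case 0: a(); break;
    default: b(); break;
    }

    /* ... becomes a single-iteration loop, so every 'break' still has
     * somewhere to jump. The loop ends in a break but can have other
     * exits before it, which is exactly the shape this patch teaches
     * the unroller to handle: */
    do {
       if (x == 0) { a(); break; }
       b();
    } while (0);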
All shader-db changes are from the dolphin uber shaders. There are a
small number of HURT shaders, but in general the improvements far
exceed the HURT.
shader-db results IVB:
total instructions in shared programs: 10018187 -> 10016468 (-0.02%)
instructions in affected programs: 104080 -> 102361 (-1.65%)
helped: 36
HURT: 15
total cycles in shared programs: 220065064 -> 154529655 (-29.78%)
cycles in affected programs: 126063017 -> 60527608 (-51.99%)
helped: 51
HURT: 0
total loops in shared programs: 2515 -> 2308 (-8.23%)
loops in affected programs: 903 -> 696 (-22.92%)
helped: 51
HURT: 0
total spills in shared programs: 4370 -> 4124 (-5.63%)
spills in affected programs: 1397 -> 1151 (-17.61%)
helped: 9
HURT: 12
total fills in shared programs: 4581 -> 4419 (-3.54%)
fills in affected programs: 2201 -> 2039 (-7.36%)
helped: 9
HURT: 15
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>