Drops one instruction from fs-sign-int.shader_test. No change in
shader-db, which contains no instances of sign(genIType). This may hurt
isign64 if the algebraic pass runs before int64 lowering, but I wasn't
sure how to mark the algebraic opt as "every bit size but 64".
v2: Update commit message about shader-db.
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> (v1)
The patch propagates the given scale_factors to the lowering options.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
It's clearer and means we don't have to update the array every time we
add an optional texture instruction argument.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
The sampler will be ignored since the underlying 'ld_mcs' operation
won't use it, so just fill the field with 0 instead of the texture to
make it clearer that's the case.
This also avoids is_high_sampler() kicking in unnecessarily, in case
we are using the operation for a texture with index >= 16.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
For some reason, this warning only occurs for me in release builds.
In file included from src/intel/compiler/brw_nir_lower_mem_access_bit_sizes.c:25:0:
src/intel/compiler/brw_nir_lower_mem_access_bit_sizes.c: In function ‘brw_nir_lower_mem_access_bit_sizes’:
src/compiler/nir/nir_builder.h:501:26: warning: ‘src_swiz[2]’ may be used uninitialized in this function [-Wmaybe-uninitialized]
alu_src.swizzle[i] = swiz[i];
~~~~~~~~~~~~~~~~~~~^~~~~~~~~
src/intel/compiler/brw_nir_lower_mem_access_bit_sizes.c:225:16: note: ‘src_swiz[2]’ was declared here
unsigned src_swiz[4];
^~~~~~~~
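One common way to silence a -Wmaybe-uninitialized false positive like
this is to zero-initialize the array; this is a sketch of the idea, not
necessarily the exact fix taken here:
unsigned src_swiz[4] = { 0 };   /* zero-init so the compiler can prove it's set */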
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Previously, we only applied the fix to shaders with a dispatch mode of
SIMD8 but the code it relies on for SIMD16 mode only applies to SIMD16
instructions. If you have a SIMD8 instruction in a SIMD16 shader,
neither would trigger and the restriction could still be hit.
Fixes: 232ed89802 "i965/fs: Register allocator shoudn't use grf127..."
Reviewed-by: Jose Maria Casanova Crespo <jmcasanova@igalia.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
By just assigning dst.type to src[i].type, we ensure that the offset
at the end of the loop actually advances by the right number of
registers. Otherwise, we get into a case where we copy with a Q type
and then offset with a D type, and things get out of sync.
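A hypothetical sketch of the pattern (illustrative names, not the
actual hunk): keeping dst.type in sync with each source's type makes
offset() advance by the size of the data that was just copied.
for (unsigned i = 0; i < inst->sources; i++) {
   dst.type = inst->src[i].type;   /* copy and offset with the same type */
   ibld.MOV(dst, inst->src[i]);
   dst = offset(dst, ibld, 1);     /* one dst.type-sized component per channel */
}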
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Previously, we tried to combine all cases where the instruction being
CSE'd writes to more than one MOV worth of registers into one case with
a bit of special casing for LOAD_PAYLOAD. This commit splits things so
that LOAD_PAYLOAD is entirely its own case. This makes tweaking the
LOAD_PAYLOAD case simpler in the next commit.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
A missing check for the shader stage in the fs_visitor would corrupt
the cs_prog_data.push information and trigger crashes / corruption
later when uploading the CS state.
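A minimal sketch of the kind of guard involved, using the usual Mesa
helpers (the exact hunk differs, and the assigned value is
hypothetical):
if (stage == MESA_SHADER_COMPUTE) {
   struct brw_cs_prog_data *cs_prog_data = brw_cs_prog_data(prog_data);
   cs_prog_data->push.per_thread.dwords = per_thread_dwords; /* hypothetical */
}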
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Surface reads don't need them because they just have the one address
payload. With surface writes, on the other hand, we can put the address
and the data in the two halves and avoid building the payload
altogether.
The decrease in register pressure and added freedom in register
allocation resulting from this change reduces spilling enough to improve
the performance of one customer benchmark by about 2x.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
We're about to add some more if cases, so let's have the giant
re-indent in its own patch to make review easier.
Acked-by: Iago Toral Quiroga <itoral@igalia.com>
These have clearly never seen any use. On gen8, the bottom 4 bits are
missing, so we need to shift them off before we call set_bits and shift
again when we get the bits. Found by inspection.
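A hedged sketch of that dance, with set_bits()/get_bits() standing in
for the real accessors and the field bounds made up:
desc = set_bits(desc, 15, 4, addr >> 4);   /* encode: low 4 bits aren't stored */
addr = get_bits(desc, 15, 4) << 4;         /* decode: shift back into place */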
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Instead of fetching the information out of the instruction directly,
fetch the descriptor and then pluck the information out of the
descriptor. The current scheme works ok for SEND but with SENDS, it all
falls to pieces because the descriptor is completely shuffled around.
This commit doesn't actually convert everything. One notable exception
is URB messages, which don't even use descriptors in emit_urb_WRITE yet.
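For instance (a sketch using existing helper names; the real call
sites differ):
/* Before: read the message length straight out of the instruction:
 *    unsigned mlen = brw_inst_mlen(devinfo, inst);
 * After: fetch the descriptor once, then pluck fields out of it: */
uint32_t desc = brw_inst_send_desc(devinfo, inst);
unsigned mlen = brw_message_desc_mlen(devinfo, desc);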
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
We want to be able to extract data from descriptors as well as unify a
bit of the descriptor construction.
One of the unifications we do is to unify the read/write and dataport
descriptors. On gen4-5, read/write are substantially different and the
read descriptors change between gen4 and gen4.x. On gen6, they unified
layouts between read, write, and dataport. Then, on gen8, they added
one bit to the message type field but left it reserved MBZ for
read/write messages. This commit chooses to treat that as if they
expanded the field everywhere and just didn't have enough enum values
for read/write to bother with the extra bit.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
This commit pulls the surface descriptor helpers out into brw_eu.h and
makes them no longer depend on the codegen infrastructure. This should
allow us to use them directly from the IR code instead of the generator.
This change is unfortunately less mechanical than perhaps one would
like, but it should be fairly straightforward.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Instead of magically falling back to SIMD8 for atomics and typed
messages on Ivy Bridge, explicitly figure out the exec size and pass
that into brw_surface_payload_size.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Like all the other sends, it's just mlen * REG_SIZE.
Fixes: 3cbc02e469 "intel: Use TXS for image_size when we have..."
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
If you pass a bool in as the value to set, the C standard says that it
gets converted to an int prior to shifting. If you try to store a bool
in bit 31, this lands you in undefined behavior. It's better just to
add the explicit cast and let the compiler delete it for us.
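A self-contained illustration of the hazard in plain C (not the Mesa
helper itself):
#include <stdbool.h>
#include <stdint.h>

uint32_t
set_bit31(uint32_t field, bool value)
{
   /* Undefined: 'value' promotes to a signed int, and 1 << 31 overflows it. */
   /* return field | (value << 31); */

   /* Defined: widen explicitly first; the cast is free at runtime. */
   return field | ((uint32_t)value << 31);
}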
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
There are piles of fields that it doesn't check, so using it is a lie.
The only reason it's not causing problems is that it has exactly one
user, which only uses it for MOV instructions (which aren't very
interesting) and only on Sandy Bridge and earlier hardware. Just get
rid of it and inline it in the one place it's actually used.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Acked-by: Jose Maria Casanova Crespo <jmcasanova@igalia.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
emit_uniformize() emits SHADER_OPCODE_FIND_LIVE_CHANNEL with its
flag_subreg set, so that the IR knows which flag is accessed. However,
the flag is only used on Gen7 in Align1 mode.
To avoid setting unnecessary bits in the instruction words, get the
information we need and reset the default flag register. This allows
round-tripping through the assembler/disassembler.
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Signed-off-by: Karol Herbst <kherbst@redhat.com>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The docs are fairly incomplete and inconsistent about it, but this
seems to be the reason why half-float destinations are required to be
DWORD-aligned on BDW+ projects. This way, the regioning lowering pass
will make sure that the destination components of W to HF and HF to W
conversions are aligned like the corresponding conversion operation
with 32-bit execution data type.
Tested-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
In some shaders, you can end up with a stride in the source of a
SHADER_OPCODE_MULH. One way this can happen is if the MULH is acting on
the top bits of a 64-bit value due to 64-bit integer lowering. In this
case, the compiler will produce something like this:
mul(8) acc0<1>UD g5<8,4,2>UD 0x0004UW { align1 1Q };
mach(8) g6<1>UD g5<8,4,2>UD 0x00000004UD { align1 1Q AccWrEnable };
The new region fixup pass looks at the MUL and sees a strided source and
unstrided destination and determines that the sequence is illegal. It
then attempts to fix the illegal stride by replacing the destination of
the MUL with a temporary and emitting a MOV into the accumulator:
mul(8) g9<2>UD g5<8,4,2>UD 0x0004UW { align1 1Q };
mov(8) acc0<1>UD g9<8,4,2>UD { align1 1Q };
mach(8) g6<1>UD g5<8,4,2>UD 0x00000004UD { align1 1Q AccWrEnable };
Unfortunately, this new sequence isn't correct because MOV accesses the
accumulator with a different precision to MUL and, instead of filling
the bottom 32 bits with the source and zeroing the top 32 bits, it
leaves the top 32 (or maybe 31) bits alone and full of garbage. When
the MACH comes along and tries to complete the multiplication, the
result is correct in the bottom 32 bits (which we throw away) and
garbage in the top 32 bits which are actually returned by MACH.
This commit does two things: First, it adds an assert to ensure that we
don't try to rewrite accumulator destinations of MUL instructions so we
can avoid this precision issue. Second, it modifies
required_dst_byte_stride to require a tightly packed stride so that we
fix up the sources instead and the actual code which gets emitted is
this:
mov(8) g9<1>UD g5<8,4,2>UD { align1 1Q };
mul(8) acc0<1>UD g9<8,8,1>UD 0x0004UW { align1 1Q };
mach(8) g6<1>UD g5<8,4,2>UD 0x00000004UD { align1 1Q AccWrEnable };
Fixes: efa4e4bc5f "intel/fs: Introduce regioning lowering pass"
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
For a long time, we based exec sizes on destination register widths.
We haven't been doing that since 1ca3a94427, but a few remnants
accidentally remained.
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
The original idea was that the backend compiler could eliminate
surfaces, so we would have it mark which ones are actually used,
then shrink the binding table accordingly. Unfortunately, it's a
pretty blunt mechanism - it can only prune things from the end,
not the middle - since we decide the layout before we even start
the backend compiler, and only limit the size. It also basically
gives up if it sees indirect array access.
Besides, we do the vast majority of our surface elimination in NIR
anyway, not the backend - and I don't see that trend changing any
time soon. Vulkan abandoned this plan a long time ago, and I don't
use it in Iris, but it's still been kicking around in i965.
I hacked shader-db to print the binding table size in bytes, and
observed no changes with this patch. So, this code appears to do
nothing useful.
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
The num_components value passed into get_mul_for_src is used to only
compose the parts of the swizzle that we know will be used so we don't
compose invalid swizzle components. However, we had a bug where we
passed the number of components of the add all the way through. For the
given source, we need the number of components read from that source.
In the case where we have a narrow add, say 2 components, that is
sourced from a chain of wider instructions, we may not compose all the
swizzles. All we really need to do is pass through the right number of
components at each level.
Fixes: 2231cf0ba3 "nir: Fix output swizzle in get_mul_for_src"
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Shaders containing software implementations of double-precision
operations can be so large that we cannot stack-allocate an array of
grf_count*16.
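A sketch of the usual remedy in generic C (the element type and names
here are made up):
unsigned char *table = calloc(grf_count * 16, 1);   /* heap, not stack */
/* run the analysis that previously used the on-stack array */
free(table);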
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Shaders containing software implementations of double-precision
operations can be so large that we have more than 2^16 virtual
registers during optimization.
Move the 'nr' field to the union containing the immediate storage and
expand it to 32 bits.
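A hedged sketch of the layout idea (field names illustrative, not the
actual brw_reg definition). The overlap is safe because a register is
either an immediate or has a register number, never both:
union {
   struct {
      uint32_t nr;   /* virtual register number, now 32 bits wide */
      uint32_t subnr;
   };
   uint64_t u64;     /* immediate storage shares the same bytes */
};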
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The next patch replaces an unsigned bitfield with a plain unsigned,
which triggers gcc to begin warning on signed/unsigned comparisons.
Keeping this patch separate from the actual move allows bisectability
and avoids temporarily generating additional warnings.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
A follow-on commit will move nr to the same union as the immediate
data, so we should assert these invariants before we overwrite the nr
field.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>