1 << 31 triggers an undefined-behaviour complaint (it shifts into the sign
bit of a signed int); the others are changed only for consistency.
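As a minimal standalone illustration of why the unsigned literal matters
(this is not the patched Mesa code, just the underlying rule):

    #include <stdint.h>

    #define BIT_BAD(b) (1 << (b))   /* undefined behaviour when b == 31 */
    #define BIT_OK(b)  (1u << (b))  /* well defined for b <= 31 */

    int main(void)
    {
       uint32_t sign_bit = BIT_OK(31);   /* 0x80000000, no UB */
       return sign_bit ? 0 : 1;
    }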
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Not sure where the 36 came from, but we clearly need at least
48 bytes per attribute per primitive.
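One plausible reading of the 48, stated as an assumption rather than
anything spelled out in the patch: a vec4 attribute is 16 bytes and a
triangle has 3 vertices, so

    16 bytes/attribute/vertex * 3 vertices/primitive = 48 bytes/attribute/primitive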
Signed-off-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This will be used for resetting the uniform stream in the presence of
branching, but may also be useful as an optimization to reduce how many
uniforms we have to copy out per draw call (in exchange for increasing
icache pressure).
Seeing the expansion of a QPU_GET_FIELD in an assert isn't very
informative, and it's hard to find what's going wrong without getting a dump
of the instruction that failed.
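A minimal sketch of the idea, with a made-up dump helper (the real code
would run the driver's QPU disassembler rather than printing raw bits):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Hypothetical helper: report which check failed and on which
     * 64-bit instruction before aborting, so the failure is actionable. */
    static void
    fail_with_dump(uint64_t inst, const char *check)
    {
       fprintf(stderr, "validation failed (%s) on inst 0x%016llx\n",
               check, (unsigned long long)inst);
       abort();
    }

    #define VALIDATE(inst, cond)             \
       do {                                  \
          if (!(cond))                       \
             fail_with_dump(inst, #cond);    \
       } while (0)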
We should have already emitted a NOP due to the last instruction being a
TLB or VPM write. However, if you disable dead code elimination then you
might get dead code at the end, and that dead code might have the signal
bits set to something non-default, at which point you die in an assertion
failure.
Like other resources, we need to unreference all images.
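A minimal sketch of the pattern, assuming gallium-style state (the helper
below is illustrative, not the driver's actual teardown code):

    #include "util/u_inlines.h"   /* pipe_resource_reference() */

    /* At context teardown, drop the reference held by each bound image,
     * mirroring what is already done for other resource types. */
    static void
    release_image_views(struct pipe_image_view *images, unsigned count)
    {
       for (unsigned i = 0; i < count; i++)
          pipe_resource_reference(&images[i].resource, NULL);
    }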
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Vulkan always sets this. It only affects in-place Z decompression.
This is recommended for performance, but what app uses MSAA depth
texturing?
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Stacking frames is for drivers that are capable of dual-instance encoding.
That feature is not currently enabled for B frames.
Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Cc: "11.1 11.2" <mesa-stable@lists.freedesktop.org>
There are a few different fixups that we have to do for texture
destinations that re-arrange channels, fix hardware vs. API mismatches, or
just shrink the result to fit in the NIR destination. These were all being
done in a somewhat haphazard manner. This commit replaces all of the
shuffling with a single LOAD_PAYLOAD operation at the end and makes it much
easier to insert fixups between the texture instruction itself and the
LOAD_PAYLOAD.
Shader-db results on Haswell:
total instructions in shared programs: 6227035 -> 6226669 (-0.01%)
instructions in affected programs: 19119 -> 18753 (-1.91%)
helped: 85
HURT: 0
total cycles in shared programs: 56491626 -> 56476126 (-0.03%)
cycles in affected programs: 672420 -> 656920 (-2.31%)
helped: 92
HURT: 42
The fs_visitor::emit_texture helper originated when we still had both NIR
and IR visitors for the FS backend. Since the old visitor was removed,
emit_texture serves no real purpose beyond arbitrarily splitting
heavily-linked code across two functions.
Normally, we expect SIMD8 shaders to contain more instructions than SIMD4x2
shaders, as it takes four instructions to operate on a vec4, rather than
a single instruction. However, the benefit is that it can process 8
objects per shader thread instead of 2.
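As rough arithmetic (not from the patch), the per-object ALU cost is
actually a wash, which is why the wins have to come from elsewhere, as
discussed below:

    SIMD4x2: 1 instruction  per vec4 op across 2 objects = 0.5 instructions/object
    SIMD8:   4 instructions per vec4 op across 8 objects = 0.5 instructions/object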
Surprisingly, the shader-db statistics show an improvement in both
instruction and cycle counts:
Synmark: -31.25% instructions, -29.27% cycles, 0 hurt.
Tessmark: -36.92% instructions, -37.81% cycles, 0 hurt.
Unigine Heaven: -3.42% instructions, -17.95% cycles, 0 hurt.
Shadow of Mordor:
+13.24% instructions (26 with fewer instructions, 45 with more),
-5.23% cycles (44 with fewer cycles, 27 with more cycles).
Presumably, this is because the SIMD8 URB messages are a much more
natural fit than the SIMD4x2 URB messages - there's a ton less header
setup.
I benchmarked Shadow of Mordor and Unigine Heaven on my Skylake GT3e,
and the performance seems to be the same or increase ever so slightly
(< 1 FPS difference). So I believe it's strictly superior.
There's also a lot more optimization we can do in scalar mode.
This will also help us finish fp64 support, as scalar support is going
to land much sooner than vec4-mode support.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
There are a couple of cycle count changes in shader-db, but it's
basically a wash.
However, with the Broadwell scalar TCS backend enabled, many
Shadow of Mordor shaders benefit from this patch. Because we don't
batch up output writes for TCS, vec4 outputs might not have all
components defined. Many output writes have a value of undef,
which is useless.
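Conceptually (made-up types, not the real IR), the cleanup amounts to:

    #include <stdbool.h>

    /* Drop any output store whose source value is entirely undefined;
     * such a write can never affect the rendered result. */
    struct output_store { bool src_is_undef; bool removed; };

    static bool
    remove_undef_stores(struct output_store *stores, unsigned count)
    {
       bool progress = false;
       for (unsigned i = 0; i < count; i++) {
          if (stores[i].src_is_undef && !stores[i].removed) {
             stores[i].removed = true;
             progress = true;
          }
       }
       return progress;
    }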
With scalar TCS, stats for tessellation shaders on Broadwell:
total instructions in shared programs: 1283000 -> 1280444 (-0.20%)
instructions in affected programs: 34302 -> 31746 (-7.45%)
helped: 71
HURT: 0
total cycles in shared programs: 10798768 -> 10780682 (-0.17%)
cycles in affected programs: 158004 -> 139918 (-11.45%)
helped: 71
HURT: 0
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
shader-db statistics on Broadwell:
total instructions in shared programs: 8963409 -> 8962455 (-0.01%)
instructions in affected programs: 60858 -> 59904 (-1.57%)
helped: 318
HURT: 0
total cycles in shared programs: 71408022 -> 71406276 (-0.00%)
cycles in affected programs: 398416 -> 396670 (-0.44%)
helped: 199
HURT: 51
GAINED: 1
The only shaders affected were in Dota 2 Reborn.
It also sets up for the next optimization.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
This better reflects what it does. I plan to add other ALU
optimizations as well, so the old name would be confusing.
In preparation for that, also move the file comments about csels
above the opt_undef_csel function, and delete the ones about there
not being other optimizations.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
According to Timothy, using program_string_id == 0 to identify the
passthrough TCS is going to be problematic for his shader cache work.
So, change it to strcmp() the name at visitor creation time.
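Roughly what the check looks like (the exact field and name string are
assumptions, not quoted from the patch):

    #include <stdbool.h>
    #include <string.h>

    /* Identify the passthrough TCS by its name at visitor creation time
     * instead of relying on program_string_id == 0. */
    static bool
    is_passthrough_tcs(const char *shader_name)
    {
       return shader_name && strcmp(shader_name, "passthrough") == 0;
    }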
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Fix Windows in 32-bit mode when hyperthreading is disabled on Xeons.
Also add some support for asymmetric processor topologies.
Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
This tells LLVM to always use SMEM loads for descriptors. It fixes a
regression in piglit's arb_shader_storage_buffer_object/execution/indirect.shader_test
that was caused by LLVM r268259 (but the proper fix is really here in Mesa).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Beginning with commit 7b208a73, Unigine Valley began hanging the GPU on
Gen >= 8 platforms.
Evidently that commit allowed the scheduler to make different choices
that somehow finally ran afoul of a hardware bug in which POW and FDIV
instructions may not be followed by an instruction with two destination
registers (including compressed instructions). I presume the conditions
are more complex than that, but the internal hardware bug report (BDWGFX
bug_de 1696294) does not contain much more information.
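A heavily simplified sketch of the restriction being worked around
(made-up types; this is not the actual i965 fix):

    #include <stdbool.h>

    enum op { OP_POW, OP_FDIV, OP_ADD, OP_NOP };

    struct inst {
       enum op op;
       bool two_reg_dst;   /* e.g. a compressed destination */
    };

    /* After scheduling, a NOP (or a different ordering) is needed wherever
     * a POW/FDIV is immediately followed by an instruction that writes a
     * two-register destination. */
    static bool
    needs_workaround(const struct inst *prev, const struct inst *next)
    {
       bool prev_is_math = prev->op == OP_POW || prev->op == OP_FDIV;
       return prev_is_math && next->two_reg_dst;
    }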
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94924
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> [v1]
Tested-by: Mark Janes <mark.a.janes@intel.com> [v1]
Reviewed-by: Francisco Jerez <currojerez@riseup.net>
When gathering query results, swr_gather_stats was
unnecessarily stalling the entire pipeline. Results are now
collected asynchronously, with a fence marking completion.
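In illustrative form, with made-up types rather than the SWR ones:

    #include <stdbool.h>

    /* Instead of stalling the pipeline to read stats back, queue the
     * readback and attach a fence; callers wait on the fence only when
     * they actually need the result. */
    struct fence { volatile bool signaled; };

    struct stats_query {
       struct fence done;            /* signaled once the stats land */
       unsigned long long counters;  /* filled in asynchronously */
    };

    static bool
    stats_ready(const struct stats_query *q)
    {
       return q->done.signaled;      /* poll; no full pipeline flush */
    }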
Reviewed-By: George Kyriazis <george.kyriazis@intel.com>