We should make the destination of the textureLod() operation have the same
number of components as the destination of the original textureGrad().
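A minimal sketch of the idea, assuming the nir_builder context of the
lowering pass (tex being the original txd instruction and txl the
replacement; both names are illustrative):

   /* Give the replacement textureLod the same destination size as the
    * original textureGrad instead of a hard-coded component count. */
   nir_ssa_dest_init(&txl->instr, &txl->dest,
                     tex->dest.ssa.num_components, 32, NULL);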
Fixes regression in ES3-CTS.gtf.GL3Tests.shadow
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99072
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This reverts commit 6aa730000f.
This was changing the size of the undef to always be 1 (the number of
inputs to imov and fmov), which is wrong; we could be moving a vec4, for
example.
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
We want to perform the URB read into a vec4 temporary, with no writemask,
then issue a MOV to swizzle the data and store it to the actual
destination, using the final writemask.
We were doing this wrong. For example, let's say we wanted to read
a vec2 stored in components 2-3 of a vec4. We would generate a URB
read message of:
SEND <actual destination>.XY <header with mask set to XY>
MOV <actual destination>.XY <actual destination>.ZW
This doesn't work, because the URB message reads the .XY components
of the vec4, rather than the ZW. It writes to the right place, but
with the wrong data. Then the MOV comes along and overwrites it
with data that didn't even come from the URB at all.
Instead we want to do:
SEND <temporary> <header with mask set to ZW>
MOV <actual destination>.XY <temporary>.ZW
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Asking the DC for less than one cacheline (4 owords) of data for
uniform pull constants is suboptimal because the DC cannot request
less than that from L3, resulting in wasted bandwidth and unnecessary
message dispatch overhead, and exacerbating the IVB L3 serialization
bug. The following table summarizes the overall framerate improvement
(with statistical significance of 5% and sample size ~10) from the
whole series up to this patch for several benchmarks and hardware
generations:
                          |      SKL      |      BDW      |      HSW
SynMark2 OglShMapPcf      | 24.63% ±0.45% |  4.01% ±0.70% | 10.31% ±0.38%
GfxBench4 gl_manhattan31  |  5.93% ±0.35% |  3.92% ±0.31% |  6.62% ±0.22%
GfxBench4 gl_4            |  2.52% ±0.44% |  1.23% ±0.10% |      N/A
Unigine Valley            |  0.83% ±0.17% |  0.23% ±0.05% |  0.74% ±0.45%
Note that GfxBench4 ships two versions of the Manhattan demo: the
original gl_manhattan demo, which doesn't use UBOs and is therefore
unaffected by this patch, and the gl_manhattan31 demo based on
GL 4.3/GLES 3.1, which benefits from it as shown above.
I haven't observed any statistically significant regressions in the
benchmarks I have at hand. Note that the comparatively huge
improvement on SKL in the OglShMapPcf test case is due to the combined
effect of this patch and the register pressure benefit on SKL+ of
"i965/fs: Switch to the constant cache for uniform pull constants.",
part of the same series.
Going up to 8 oword blocks would improve performance of pull constants
even more, but at the cost of some additional bandwidth and register
pressure, so it would have to be done on-demand based on the number of
constants actually used by the shader.
v2: Fix for Gen4 and 5.
v3: Non-trivial rebase. Rework to allow the visitor to specify
arbitrary pull constant block sizes.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Change the FS generator to ask the dataport for enough owords worth of
constants to fill the execution size of the instruction, which means
that the visitor now needs to set the execution size correctly for
uniform pull constant load instructions -- something we had been
neglecting until now.
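The arithmetic this implies, as a stand-alone illustration rather than
the actual generator code (assuming one dword of constant data per
enabled channel and 16-byte owords):

   #include <stdio.h>

   #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

   int main(void)
   {
      for (unsigned exec_size = 8; exec_size <= 16; exec_size *= 2) {
         const unsigned bytes = exec_size * 4;            /* one dword per channel */
         const unsigned owords = DIV_ROUND_UP(bytes, 16); /* 16 bytes per oword */
         printf("SIMD%u needs %u owords of constants\n", exec_size, owords);
      }
      return 0;
   }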
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We'll need roughly the same logic in other places and it would be
annoying to duplicate it. Instead, factor it out into a function-like
macro that takes the number of dwords per block (which will prove more
convenient than taking the same value in owords or some other unit).
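A sketch of what such a macro could look like, built on the existing
BRW_DATAPORT_OWORD_BLOCK_* message control encodings (the macro name and
error handling here are illustrative, not necessarily what the patch adds):

   #include <stdlib.h>

   /* Oword block size encodings used by the dataport block read/write
    * messages. */
   #define BRW_DATAPORT_OWORD_BLOCK_1_OWORDLOW  0
   #define BRW_DATAPORT_OWORD_BLOCK_2_OWORDS    2
   #define BRW_DATAPORT_OWORD_BLOCK_4_OWORDS    3
   #define BRW_DATAPORT_OWORD_BLOCK_8_OWORDS    4

   /* Map a block size given in dwords to the message control encoding. */
   #define BRW_DATAPORT_OWORD_BLOCK_DWORDS(n)              \
      ((n) == 4 ? BRW_DATAPORT_OWORD_BLOCK_1_OWORDLOW :    \
       (n) == 8 ? BRW_DATAPORT_OWORD_BLOCK_2_OWORDS :      \
       (n) == 16 ? BRW_DATAPORT_OWORD_BLOCK_4_OWORDS :     \
       (n) == 32 ? BRW_DATAPORT_OWORD_BLOCK_8_OWORDS :     \
       (abort(), ~0))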
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This reverts to using the oword block read messages for uniform pull
constant loads, as used to be the case until
4c1fdae0a0. There are two important differences
though: Now the L3 cacheability bits are set up correctly for UBOs
(since 11f5d8a5d4), and we target the
constant cache instead of the data cache. The latter used to get no
L3 way allocation on boot on all platforms that existed at the time,
so oword read messages wouldn't get cached on L3 regardless of the
MOCS bits, which probably explains the apparent slowness of oword
fetches.
Constant cache loads seem to perform better than SIMD4x2 sampler loads
in a number of cases: they alleviate some of the cache thrashing
caused by the competition with textures for the L1/L2 sampler caches,
and they allow fetching up to 128B worth of constants with a single
oword fetch message.
Note that IVB devices suffer from a hardware bug that leads to
serialization of L3 read requests overlapping the same cacheline, as a
result of an L3 coherency-preservation mechanism that is buggy on IVB.
Since read requests for matching cachelines from any L3 client are not
pipelined, throughput may decrease in cases where there are no
non-overlapping requests left in the queue that can be processed
between them.
This situation should be relatively uncommon as long as we make sure
that we don't use the 1/2 oword messages in cases where the shader
intends to read from any other location of the same cacheline at some
other point. This is generally a good idea anyway on all generations
because using the 1 and 2 oword messages is expected to waste
bandwidth since the minimum L3 request size for the DC is exactly 4
owords (i.e. one cacheline). A future commit will have this effect.
I haven't been able to find any real-world example where this would
still result in a regression on IVB, but if someone happens to find
one it shouldn't be too difficult to add an IVB-specific check to have
it fall back to the sampler cache for pull constant loads.
Note that on SKL+ this change has the additional benefit of reducing
the register footprint of pull constant loads. The following table
summarizes the effect of the whole series on several shader-db stats:
         Total instructions                Total cycles
BWR:   4571248 -> 4568342 (-0.06%)    123375740 -> 123373296 (-0.00%)
ELK:   3989020 -> 3985402 (-0.09%)     98757068 ->  98754058 (-0.00%)
ILK:   6383591 -> 6376787 (-0.11%)    143649910 -> 143648914 (-0.00%)
SNB:   7528395 -> 7501446 (-0.36%)    103503796 -> 102460370 (-1.01%)
IVB:   6949221 -> 6943317 (-0.08%)     60592262 ->  60584422 (-0.01%)
HSW:   6409753 -> 6403702 (-0.09%)     60609070 ->  60604414 (-0.01%)
BDW:   8043467 -> 7976364 (-0.83%)     68427730 ->  68483042 (0.08%)
CHV:   8045019 -> 7977916 (-0.83%)     68297426 ->  68352756 (0.08%)
SKL:   8204037 -> 7939086 (-3.23%)     66583900 ->  65624378 (-1.44%)
      Lost->Gained   Total spills             Total fills
BWR:     5 ->   5    1488 -> 1488 (0.00%)     1957 -> 1957 (0.00%)
ELK:     5 ->   5    1489 -> 1489 (0.00%)     1958 -> 1958 (0.00%)
ILK:     1 ->   4    1449 -> 1449 (0.00%)     1921 -> 1921 (0.00%)
SNB:     0 ->   0     549 ->  549 (0.00%)       52 ->   52 (0.00%)
IVB:    13 ->   3    1271 -> 1271 (0.00%)     1162 -> 1162 (0.00%)
HSW:    11 ->   0    1271 -> 1271 (0.00%)     1162 -> 1162 (0.00%)
BDW:    12 ->   0    1340 -> 1340 (0.00%)     1452 -> 1452 (0.00%)
CHV:    12 ->   0    1340 -> 1340 (0.00%)     1452 -> 1452 (0.00%)
SKL:     0 -> 120    1269 ->  375 (-70.45%)   1563 ->  690 (-55.85%)
v3: Non-trivial rebase.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
brw_set_dp_read_message already had a target_cache argument, but its
interpretation was rather convoluted (on Gen6 the render cache was
used if the caller asked for it; otherwise it was ignored and the
sampler cache was used instead), and the constant cache wasn't representable at
all. brw_set_dp_write_message used the data cache on Gen7+ except for
RENDER_TARGET_WRITE messages, in which case it would use the render
cache. On Gen6 the render cache was always used.
Instead of the above, provide the shared unit SFID that the caller
expects will be used. Makes no functional changes.
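Roughly, a call site that wants a uniform pull constant load to go
through the constant cache on Gen7+ would now name the SFID directly,
along these lines (argument names are placeholders and the exact
prototype may differ):

   brw_set_dp_read_message(p, insn, surf_index,
                           msg_control, msg_type,
                           GEN6_SFID_DATAPORT_CONSTANT_CACHE, /* target SFID */
                           mlen, true /* header_present */, rlen);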
v3: Non-trivial rebase.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This is needed to make sure that the constant cache is coherent with
previous rendering when we start using it for pull constant loads.
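As a sketch, the kind of flush this refers to (helper and flag names are
assumed; any additional stall bits that may be required are not shown):

   brw_emit_pipe_control_flush(brw, PIPE_CONTROL_CONST_CACHE_INVALIDATE);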
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This makes Gen7/7.5 match Gen8-9.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
What we're really doing is copying a texture, not blitting it in the sense
of glBlitFramebuffer. Also, the intel_miptree_copy function is capable of
properly handling compressed textures, which intel_miptree_blit is not.
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97473
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
v2: put enum directly in gl_API.xml (Ilia)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Suggested by Marek.
v2: Use new driver flag (Marek)
v3: Fix i965 comments (Lionel)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Even if lower_txd_cube_map isn't. Suggested by Ken to make the flag more
consistent with its name.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This gets the lowering on the Vulkan driver too, which is required for
hardware that does not have the sample_l_d message (up to IvyBridge).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This is ported from the Intel lowering pass that we use with GLSL IR.
It takes care of lowering texture gradients on shadow samplers other
than cube maps, which Intel hardware requires for gen < 8.
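The heart of the lowering is the standard gradient-to-LOD computation
from the GL specification. Expressed as stand-alone scalar C for a 2D
sampler (illustrative only; the pass emits the equivalent NIR ops):

   #include <math.h>

   /* Convert screen-space gradients into an explicit LOD so that
    * textureGrad() can be replaced by textureLod(). */
   static float lod_from_gradients(const float dPdx[2], const float dPdy[2],
                                   float width, float height)
   {
      /* Scale the gradients into texel space using the texture size. */
      const float mx = sqrtf(dPdx[0] * width  * dPdx[0] * width +
                             dPdx[1] * height * dPdx[1] * height);
      const float my = sqrtf(dPdy[0] * width  * dPdy[0] * width +
                             dPdy[1] * height * dPdy[1] * height);
      /* rho from the spec's LOD computation; lambda = log2(rho). */
      return log2f(fmaxf(mx, my));
   }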
v2 (Ken):
- Use the helper function to retrieve ddx/ddy
- Swizzle away size components we are not interested in
v3:
- Get rid of the ddx/ddy helper and use nir_tex_instr_src_index
instead (Ken, Eric)
v4:
- Add a 'continue' statement if the lowering makes progress because it
replaces the original texture instruction
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> (v3)
This is ported from the Intel lowering pass that we use with GLSL IR.
The NIR pass only handles cube maps, not shadow samplers, which are
also lowered for gen < 8 on Intel hardware. We will add support for
that in a later patch, at which point we should be able to remove
the GLSL IR lowering pass.
v2:
- added a helper to retrieve ddx/ddy parameters (Ken)
- No need to make size.z=1.0, we are only using component x anyway (Iago)
v3:
- Get rid of the ddx/ddy helper and use nir_tex_instr_src_index
instead (Ken, Eric)
v4:
- When emitting the textureLod operation, copy all texture parameters
from the original textureGrad() (except for ddx/ddy) using a loop
- Add a 'continue' statement if the lowering makes progress because it
replaces the original texture instruction
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> (v3)
This was written specifically for RECT samplers. Make it more generic so
we can call this from the gradient lowerings too.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
git grep -l comparitor | xargs sed -i 's/comparitor/comparator/g'
Just happened to notice this in a patch that was sent and included one
of the tokens in question.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
In 19a541f (nir: Get rid of nir_constant_data) a number of places that
operated on nir_constant::values were mechanically converted to operate
on the whole array without regard for the base type. Only
GLSL_TYPE_FLOAT and GLSL_TYPE_DOUBLE can be matrices, so only those
types can have data in array elements other than element 0.
See also b870394.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
Cc: Iago Toral Quiroga <itoral@igalia.com>
We shouldn't ever see a SEL with conditional mod other than GE (for max)
or L (for min), but we might see one with predication and no conditional
mod.
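A sketch of the guard this implies, with assumed surrounding context
(not the literal patch):

   /* Only treat a SEL as a min/max if it carries the GE (max) or L (min)
    * conditional mod; a predicated SEL without a conditional mod is an
    * ordinary select and must not be rewritten. */
   if (inst->opcode == BRW_OPCODE_SEL &&
       inst->conditional_mod != BRW_CONDITIONAL_GE &&
       inst->conditional_mod != BRW_CONDITIONAL_L)
      return false;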
total instructions in shared programs: 8241806 -> 8241902 (0.00%)
instructions in affected programs: 13284 -> 13380 (0.72%)
HURT: 62
total cycles in shared programs: 84165104 -> 84166244 (0.00%)
cycles in affected programs: 75364 -> 76504 (1.51%)
helped: 10
HURT: 34
Fixes generated code in at least Sanctum 2, Borderlands 2, Goat
Simulator, XCOM: Enemy Unknown, and Shogun 2.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92234
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We can hardcode all of the fields for swizzling in the geometry shader.
The advantage is that we use fewer descriptor slots and we no longer have to
update any of the (ring) descriptors when the geometry shader changes.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Note that the memory layout of one vertex stream inside one "item" (= memory
written by one GS wave) on the GSVS ring is:
t0v0c0 ... t15v0c0 t0v1c0 ... t15v1c0 ... t0vLc0 ... t15vLc0
t0v0c1 ... t15v0c1 t0v1c1 ... t15v1c1 ... t0vLc1 ... t15vLc1
...
t0v0cL ... t15v0cL t0v1cL ... t15v1cL ... t0vLcL ... t15vLcL
t16v0c0 ... t31v0c0 t16v1c0 ... t31v1c0 ... t16vLc0 ... t31vLc0
t16v0c1 ... t31v0c1 t16v1c1 ... t31v1c1 ... t16vLc1 ... t31vLc1
...
t16v0cL ... t31v0cL t16v1cL ... t31v1cL ... t16vLcL ... t31vLcL
...
t48v0c0 ... t63v0c0 t48v1c0 ... t63v1c0 ... t48vLc0 ... t63vLc0
t48v0c1 ... t63v0c1 t48v1c1 ... t63v1c1 ... t48vLc1 ... t63vLc1
...
t48v0cL ... t63v0cL t48v1cL ... t63v1cL ... t48vLcL ... t63vLcL
where tNN indicates the thread number, vNN the vertex number (in the order of
EMIT_VERTEX), and cNN the output component (vL and cL are the last vertex and
component, respectively).
The vertex streams are laid out sequentially.
The swizzling by 16 threads is hard-coded in the way the VGT generates the
offset passed into the GS copy shader, and the jump every 16 threads is
calculated from VGT_GSVS_RING_OFFSET_n and VGT_GSVS_RING_ITEMSIZE in a way
that makes it difficult to deviate from this layout (at least that's what
I've experimentally confirmed on VI after first trying to go the simpler
route of just interleaving the vertex streams).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
SimplifyCFG generates a switch instruction anyway when all four streams
are present, but is simultaneously not smart enough to eliminate some
redundant jumps that it generates.
The generated assembly is still a bit silly, probably because the
control flow annotation doesn't know how to handle a switch with a
uniform condition.
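For illustration, emitting the per-stream dispatch as an explicit switch
through the LLVM C API looks roughly like this (builder, ctx, stream_id
and the basic blocks are assumed to already exist in the copy shader
generation code):

   LLVMValueRef sw = LLVMBuildSwitch(builder, stream_id, end_bb, 4);
   for (unsigned stream = 0; stream < 4; stream++)
      LLVMAddCase(sw, LLVMConstInt(LLVMInt32TypeInContext(ctx), stream, 0),
                  stream_bb[stream]);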
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When running the copy shader for vertex streams != 0, the SX does not need
any data from us (there is no rasterization for the higher vertex streams,
only streamout).
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This affects GS copy shaders. When an output is meant for vertex
stream != 0, we don't have to make it available to the pixel
shader.
There is a minor inefficiency here because the GLSL varying packing pass
does not group varyings of the same vertex stream together, but it
shouldn't be important in practice.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>