Commit graph

80360 commits

Iago Toral Quiroga
47351b843a nir/lower_tex: fix number of components in replace_gradient_with_lod()
We should make the dest in the textureLod() operation have the same number
of components as the destination in the original textureGrad().
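
A minimal sketch of the shape of the fix, assuming NIR helper names and
signatures of this vintage ('txd' standing for the original instruction
and 'txl' for the replacement being built):

   /* Size the replacement txl's destination from the txd being
    * replaced instead of assuming vec4, so e.g. a scalar shadow
    * result stays scalar. */
   nir_ssa_dest_init(&txl->instr, &txl->dest,
                     nir_tex_instr_dest_size(txd), 32, NULL);
   nir_builder_instr_insert(b, &txl->instr);
   nir_ssa_def_rewrite_uses(&txd->dest.ssa,
                            nir_src_for_ssa(&txl->dest.ssa));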

Fixes regression in ES3-CTS.gtf.GL3Tests.shadow

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99072
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-15 08:34:55 +01:00
Timothy Arceri
a5502a721f Revert "nir: Turn imov/fmov of undef into undef."
This reverts commit 6aa730000f.

This was changing the size of the undef to always be 1 (the number of inputs
to imov and fmov), which is wrong; we could be moving a vec4, for example.

Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
2016-12-15 17:05:12 +11:00
Kenneth Graunke
84e19322d3 i965/vec4: Fix TCS output reads with non-zero component qualifiers.
We want to perform the URB read to a vec4 temporary, with no writemask,
then issue a MOV to swizzle the data and store it to the actual
destination, using the final writemask.

We were doing this wrong.  For example, let's say we wanted to read
a vec2 stored in components 2-3 of a vec4.  We would generate a URB
read message of:

   SEND <actual destination>.XY <header with mask set to XY>
   MOV <actual destination>.XY <actual destination>.ZW

This doesn't work, because the URB message reads the .XY components
of the vec4, rather than the ZW.  It writes to the right place, but
with the wrong data.  Then the MOV comes along and overwrites it
with data that didn't even come from the URB at all.

Instead we want to do:

   SEND <temporary> <header with mask set to ZW>
   MOV <actual destination>.XY <temporary>.ZW

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-14 21:15:39 -08:00
Francisco Jerez
fd3120d85c i965/disasm: Decode dataport constant cache control fields.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:27 -08:00
Francisco Jerez
23caf75182 i965/fs: Remove the FS_OPCODE_SET_SIMD4X2_OFFSET virtual opcode.
Not used anymore.  It was just a scalar MOV.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:27 -08:00
Francisco Jerez
e014058195 i965/fs: Drop useless access mode override from pull constant generator code.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:27 -08:00
Francisco Jerez
b56fa830c6 i965/fs: Fetch one cacheline of pull constants at a time.
Asking the DC for less than one cacheline (4 owords) of data for
uniform pull constants is suboptimal because the DC cannot request
less than that from L3, resulting in wasted bandwidth and unnecessary
message dispatch overhead, and exacerbating the IVB L3 serialization
bug.  The following table summarizes the overall framerate improvement
(with statistical significance of 5% and sample size ~10) from the
whole series up to this patch for several benchmarks and hardware
generations:

                         | SKL           | BDW          | HSW
SynMark2 OglShMapPcf     | 24.63% ±0.45% | 4.01% ±0.70% | 10.31% ±0.38%
GfxBench4 gl_manhattan31 |  5.93% ±0.35% | 3.92% ±0.31% |  6.62% ±0.22%
GfxBench4 gl_4           |  2.52% ±0.44% | 1.23% ±0.10% |      N/A
Unigine Valley           |  0.83% ±0.17% | 0.23% ±0.05% |  0.74% ±0.45%

Note that there are two versions of the Manhattan demo shipped with
GfxBench4: the original gl_manhattan demo, which doesn't use UBOs and
therefore sees no effect from this patch, and the gl_manhattan31 demo
based on GL 4.3/GLES 3.1, which benefits as shown above.

I haven't observed any statistically significant regressions in the
benchmarks I have at hand.  Note that the comparatively huge
improvement on SKL in the OglShMapPcf test case is due to the combined
effect of this patch and the register pressure benefit on SKL+ of
"i965/fs: Switch to the constant cache for uniform pull constants.",
part of the same series.

Going up to 8 oword blocks would improve performance of pull constants
even more, but at the cost of some additional bandwidth and register
pressure, so it would have to be done on-demand based on the number of
constants actually used by the shader.
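
A sketch of the intended rounding, under the assumption that offsets and
sizes are in bytes and a cacheline is 64 bytes (4 owords); the ALIGN
helper is written out so the snippet is self-contained:

   #define CACHELINE_BYTES 64
   #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

   /* Widen a pull-constant fetch to the full cacheline(s) covering it,
    * since the DC cannot request less than a cacheline from L3. */
   static void
   widen_to_cacheline(unsigned offset, unsigned size,
                      unsigned *block_offset, unsigned *block_size)
   {
      *block_offset = offset & ~(CACHELINE_BYTES - 1);
      *block_size = ALIGN(offset + size - *block_offset, CACHELINE_BYTES);
   }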

v2: Fix for Gen4 and 5.
v3: Non-trivial rebase.  Rework to allow the visitor to specify
    arbitrary pull constant block sizes.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:27 -08:00
Francisco Jerez
9b22a0d295 i965/fs: Expose arbitrary pull constant load sizes to the IR.
Change the FS generator to ask the dataport for enough owords worth of
constants to fill the execution size of the instruction -- which means
that the visitor now needs to set the execution size correctly for
uniform pull constant load instructions, which we were kind of
neglecting until now.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:26 -08:00
Francisco Jerez
7a6aadb76f i965: Factor out oword block read and write message control calculation.
We'll need roughly the same logic in other places and it would be
annoying to duplicate it.  Instead factor it out into a function-like
macro that takes the number of dwords per block (which will prove more
convenient than taking the same value in owords or some other unit).
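
A sketch of such a macro, with hypothetical names; the block-size
encodings (1 oword low, 2, 4 and 8 owords) are assumptions taken from
the dataport message descriptor layout:

   #include <stdlib.h>

   #define OWORD_BLOCK_1_OWORDLOW 0
   #define OWORD_BLOCK_2_OWORDS   2
   #define OWORD_BLOCK_4_OWORDS   3
   #define OWORD_BLOCK_8_OWORDS   4

   /* 1 oword == 4 dwords; any other count is a programming error. */
   #define OWORD_BLOCK_DWORDS(n)                  \
      ((n) == 4  ? OWORD_BLOCK_1_OWORDLOW :       \
       (n) == 8  ? OWORD_BLOCK_2_OWORDS :         \
       (n) == 16 ? OWORD_BLOCK_4_OWORDS :         \
       (n) == 32 ? OWORD_BLOCK_8_OWORDS :         \
       (abort(), ~0))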

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:26 -08:00
Francisco Jerez
ad38ba1134 i965/fs: Switch to the constant cache for uniform pull constants.
This reverts to using the oword block read messages for uniform pull
constant loads, as used to be the case until
4c1fdae0a0.  There are two important differences
though: Now the L3 cacheability bits are set up correctly for UBOs
(since 11f5d8a5d4), and we target the
constant cache instead of the data cache.  The latter used to get no
L3 way allocation on boot on all platforms that existed at the time,
so oword read messages wouldn't get cached on L3 regardless of the
MOCS bits, which probably explains the apparent slowness of oword
fetches.

Constant cache loads seem to perform better than SIMD4x2 sampler loads
in a number of cases, they alleviate some of the cache thrashing
caused by the competition with textures for the L1/L2 sampler caches,
and they allow fetching up to 128B worth of constants with a single
oword fetch message.

Note that IVB devices suffer from a hardware bug that leads to
serialization of L3 read requests overlapping the same cacheline, as a
result of an L3 coherency-preservation mechanism that is buggy on IVB.
Since read requests for matching cachelines from any L3 client are not
pipelined, throughput may decrease in cases where there are no
non-overlapping requests left in the queue that can be processed
between them.

This situation should be relatively uncommon as long as we make sure
that we don't use the 1/2 oword messages in cases where the shader
intends to read from any other location of the same cacheline at some
other point.  This is generally a good idea anyway on all generations
because using the 1 and 2 oword messages is expected to waste
bandwidth since the minimum L3 request size for the DC is exactly 4
owords (i.e. one cacheline).  A future commit will have this effect.
I haven't been able to find any real-world example where this would
still result in a regression on IVB, but if someone happens to find
one it shouldn't be too difficult to add an IVB-specific check to have
it fall back to the sampler cache for pull constant loads.

Note that on SKL+ this change has the additional benefit of reducing
the register footprint of pull constant loads.  The following table
summarizes the effect of the whole series on several shader-db stats:

     Total instructions          Total cycles
BWR: 4571248 -> 4568342 (-0.06%) 123375740 -> 123373296 (-0.00%)
ELK: 3989020 -> 3985402 (-0.09%)  98757068 -> 98754058 (-0.00%)
ILK: 6383591 -> 6376787 (-0.11%) 143649910 -> 143648914 (-0.00%)
SNB: 7528395 -> 7501446 (-0.36%) 103503796 -> 102460370 (-1.01%)
IVB: 6949221 -> 6943317 (-0.08%)  60592262 -> 60584422 (-0.01%)
HSW: 6409753 -> 6403702 (-0.09%)  60609070 -> 60604414 (-0.01%)
BDW: 8043467 -> 7976364 (-0.83%)  68427730 -> 68483042 (0.08%)
CHV: 8045019 -> 7977916 (-0.83%)  68297426 -> 68352756 (0.08%)
SKL: 8204037 -> 7939086 (-3.23%)  66583900 -> 65624378 (-1.44%)

     Lost->Gained Total spills          Total fills
BWR:  5 ->   5    1488 -> 1488 (0.00%)  1957 -> 1957 (0.00%)
ELK:  5 ->   5    1489 -> 1489 (0.00%)  1958 -> 1958 (0.00%)
ILK:  1 ->   4    1449 -> 1449 (0.00%)  1921 -> 1921 (0.00%)
SNB:  0 ->   0     549 -> 549 (0.00%)     52 -> 52 (0.00%)
IVB: 13 ->   3    1271 -> 1271 (0.00%)  1162 -> 1162 (0.00%)
HSW: 11 ->   0    1271 -> 1271 (0.00%)  1162 -> 1162 (0.00%)
BDW: 12 ->   0    1340 -> 1340 (0.00%)  1452 -> 1452 (0.00%)
CHV: 12 ->   0    1340 -> 1340 (0.00%)  1452 -> 1452 (0.00%)
SKL:  0 -> 120    1269 -> 375 (-70.45%) 1563 -> 690 (-55.85%)

v3: Non-trivial rebase.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:26 -08:00
Francisco Jerez
3c78d31374 i965: Let the caller of brw_set_dp_write/read_message control the target cache.
brw_set_dp_read_message already had a target_cache argument, but its
interpretation was rather convoluted (on Gen6 the render cache was
used if the caller asked for it; otherwise the argument was ignored
and the sampler cache was used instead), and the constant cache wasn't
representable at all.  brw_set_dp_write_message used the data cache on
Gen7+ except for RENDER_TARGET_WRITE messages, in which case it would
use the render cache.  On Gen6 the render cache was always used.

Instead of the above, have the caller provide the SFID of the shared
unit it expects to be used.  No functional changes.
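
A sketch of the resulting calling convention, with hypothetical enum
and function shapes (the real SFID values live in brw_defines.h):

   struct insn;  /* stand-in for the backend's instruction type */

   enum sfid_sketch {
      SFID_DP_SAMPLER_CACHE,
      SFID_DP_RENDER_CACHE,
      SFID_DP_CONSTANT_CACHE,
      SFID_DP_DATA_CACHE,
   };

   /* Hypothetical shape of the new convention: the emitter no longer
    * guesses per message type; it targets whatever SFID it was given. */
   void set_dp_read_message(struct insn *insn, unsigned msg_control,
                            unsigned msg_type,
                            enum sfid_sketch target_cache);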

v3: Non-trivial rebase.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:26 -08:00
Francisco Jerez
591e14ec08 i965/gen6+: Invalidate constant cache on brw_emit_mi_flush().
This makes sure that the constant cache is coherent with previous
rendering when we start using it for pull constant loads.
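
A sketch of the bit being folded in, with the PIPE_CONTROL bit
positions written out as assumptions (Mesa has its own names for them):

   #include <stdint.h>

   /* Assumed DW1 bit positions from the PIPE_CONTROL command. */
   #define PC_CONST_CACHE_INVALIDATE (1u << 3)
   #define PC_TEX_CACHE_INVALIDATE   (1u << 10)

   /* The general flush now also invalidates the read-only constant
    * cache, so later pull-constant loads see prior rendering. */
   static uint32_t
   mi_flush_bits(void)
   {
      return PC_TEX_CACHE_INVALIDATE | PC_CONST_CACHE_INVALIDATE;
   }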

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-14 16:50:26 -08:00
Kenneth Graunke
e0c1ec3b09 genxml: Make Gen8 3DSTATE_DS SIMD8 enable work like Gen9+.
This will let us avoid ifdefs.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2016-12-14 14:59:06 -08:00
Kenneth Graunke
000b563a1b genxml: Rename "DS Function Enable" to "Function Enable".
This makes Gen7/7.5 match Gen8-9.

Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2016-12-14 14:59:06 -08:00
Chad Versace
72ffe8318d anv: Reject VkMemoryAllocateInfo::allocationSize == 0
The Vulkan 1.0.33 spec says "allocationSize must be greater than 0".
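
A sketch of the guard; the choice of failure mode here is an
assumption, since the spec makes violating the rule undefined rather
than prescribing a specific error:

   #include <assert.h>
   #include <vulkan/vulkan.h>

   static VkResult
   validate_allocation_size(const VkMemoryAllocateInfo *info)
   {
      assert(info->sType == VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO);

      /* Vulkan 1.0.33: "allocationSize must be greater than 0". */
      if (info->allocationSize == 0)
         return VK_ERROR_OUT_OF_DEVICE_MEMORY; /* hypothetical choice */

      return VK_SUCCESS;
   }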

Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
2016-12-14 12:04:58 -08:00
Chad Versace
5e97b8f5ce egl: Fix crashes in eglCreate*Surface()
Don't dereference a null EGLDisplay.
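
A self-contained sketch of the fix's shape (Mesa's real code uses its
internal _EGLDisplay type and error helpers; the stand-ins here are
hypothetical):

   #include <stddef.h>
   #include <EGL/egl.h>

   struct display_rec { int initialized; }; /* stand-in for _EGLDisplay */

   static struct display_rec *
   lookup_display(EGLDisplay dpy)
   {
      return (struct display_rec *) dpy;  /* NULL for EGL_NO_DISPLAY */
   }

   EGLSurface
   create_pbuffer_surface_sketch(EGLDisplay dpy)
   {
      struct display_rec *disp = lookup_display(dpy);

      /* The old code dereferenced disp unconditionally and crashed on
       * EGL_NO_DISPLAY; the real fix also records EGL_BAD_DISPLAY via
       * Mesa's _eglError() before returning. */
      if (disp == NULL)
         return EGL_NO_SURFACE;

      /* ... validate the config and create the surface ... */
      return EGL_NO_SURFACE;
   }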

Fixes tests
  dEQP-EGL.functional.negative_api.create_pbuffer_surface
  dEQP-EGL.functional.negative_api.create_pixmap_surface

Reviewed-by: Mark Janes <mark.a.janes@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=99038
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
2016-12-14 12:03:15 -08:00
Jason Ekstrand
b18cd8ce2c i965/miptree: Use intel_miptree_copy for maps
What we're really doing is copying a texture, not blitting it in the
sense of glBlitFramebuffer.  Also, the intel_miptree_copy function is
capable of properly handling compressed textures, which
intel_miptree_blit is not.

Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97473
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
2016-12-13 15:48:34 -08:00
Jason Ekstrand
157971e450 i965/blit: Fix the src dimension sanity check in miptree_copy
Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
Cc: "13.0" <mesa-stable@lists.freedesktop.org>
2016-12-13 15:48:13 -08:00
Lionel Landwerlin
60330d730b main: add INTEL_conservative_rasterization enum query support
v2: add extra parameter (Ilia)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2016-12-13 16:27:59 +00:00
Lionel Landwerlin
d4b753a50b glapi: add missing INTEL_conservative_rasterization
v2: put enum directly in gl_API.xml (Ilia)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2016-12-13 16:27:56 +00:00
Lionel Landwerlin
47285d4602 extensions: update INTEL_conservative_rasterization dependencies
Suggested by Ilia.

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2016-12-13 16:27:54 +00:00
Lionel Landwerlin
300d96a433 main: don't error when enabling conservative rasterization on gles
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
2016-12-13 16:27:51 +00:00
Lionel Landwerlin
9854a3ba8b main: use new driver flag for conservative rasterization state
Suggested by Marek.

v2: Use new driver flag (Marek)

v3: Fix i965 comments (Lionel)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-13 16:27:33 +00:00
Iago Toral Quiroga
da3389a331 nir/lower_tex: lower gradients on shadow cube maps if lower_txd_shadow is set
Even if lower_txd_cube_map isn't set. Suggested by Ken to make the flag's
behavior more consistent with its name.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-13 10:33:29 +01:00
Iago Toral Quiroga
44873ad0a4 i965: remove brw_lower_texture_gradients
This has been ported to NIR now, so we don't need to keep the GLSL IR
lowering any more.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-13 10:33:20 +01:00
Iago Toral Quiroga
77f65b3b64 i965/nir: enable lowering of texture gradient for shadow samplers
This gets the lowering on the Vulkan driver too, which is required for
hardware that does not have the sample_l_d message (up to IvyBridge).

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-13 10:33:14 +01:00
Iago Toral Quiroga
5be2e785b1 nir/lower_tex: add lowering for texture gradient on shadow samplers
This is ported from the Intel lowering pass that we use with GLSL IR.
This takes care of lowering texture gradients on shadow samplers other
than cube maps. Intel hardware requires this for gen < 8.
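
The math the lowering emits, modeled here as scalar C for a 2D shadow
sampler (the NIR pass builds the same expressions as SSA ops, per the
GL LOD equations):

   #include <math.h>

   static float
   lod_from_gradients(const float dPdx[2], const float dPdy[2],
                      float width, float height)
   {
      /* Scale the derivatives from normalized coordinates into texel
       * space using the level-0 texture size. */
      float dx[2] = { dPdx[0] * width, dPdx[1] * height };
      float dy[2] = { dPdy[0] * width, dPdy[1] * height };

      /* rho = max(|dx|, |dy|), lod = log2(rho).  Comparing squared
       * lengths lets the log2 absorb the square root. */
      float sq = fmaxf(dx[0] * dx[0] + dx[1] * dx[1],
                       dy[0] * dy[0] + dy[1] * dy[1]);
      return 0.5f * log2f(sq);
   }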

v2 (Ken):
 - Use the helper function to retrieve ddx/ddy
 - Swizzle away size components we are not interested in

v3:
- Get rid of the ddx/ddy helper and use nir_tex_instr_src_index
  instead (Ken, Eric)

v4:
- Add a 'continue' statement if the lowering makes progress because it
  replaces the original texture instruction

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> (v3)
2016-12-13 10:32:52 +01:00
Iago Toral Quiroga
f90da64fc6 i965/nir: enable lowering of texture gradient for cube maps
This gets the lowering on the Vulkan driver too.

Fixes Vulkan CTS cube map texture gradient tests in:
dEQP-VK.glsl.texture_functions.texturegrad.*

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-13 10:32:46 +01:00
Iago Toral Quiroga
a8e740c354 nir/lower_tex: add lowering for texture gradient on cube maps
This is ported from the Intel lowering pass that we use with GLSL IR.
The NIR pass only handles cube maps, not shadow samplers, which are
also lowered for gen < 8 on Intel hardware. We will add support for
that in a later patch, at which point we should be able to remove
the GLSL IR lowering pass.

v2:
- added a helper to retrieve ddx/ddy parameters (Ken)
- No need to make size.z = 1.0; we are only using component x anyway (Iago)

v3:
- Get rid of the ddx/ddy helper and use nir_tex_instr_src_index
  instead (Ken, Eric)

v4:
- When emitting the textureLod operation, copy all texture parameters
  from the original textureGrad() (except for ddx/ddy) using a loop
- Add a 'continue' statement if the lowering makes progress because it
  replaces the original texture instruction

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org> (v3)
2016-12-13 10:32:00 +01:00
Iago Toral Quiroga
bac303c286 nir/lower_tex: generalize get_texture_size()
This was written specifically for RECT samplers. Make it more generic so
we can call this from the gradient lowerings too.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
2016-12-13 10:31:38 +01:00
Ilia Mirkin
fd249c803e treewide: s/comparitor/comparator/
git grep -l comparitor | xargs sed -i 's/comparitor/comparator/g'

Just happened to notice this in a patch that was sent and included one
of the tokens in question.

Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Acked-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
2016-12-12 22:13:07 -05:00
Ian Romanick
a0ce9ff8c4 nir: Only float and double types can be matrices
In 19a541f (nir: Get rid of nir_constant_data) a number of places that
operated on nir_constant::values were mechanically converted to operate
on the whole array without regard for the base type.  Only
GLSL_TYPE_FLOAT and GLSL_TYPE_DOUBLE can be matrices, so only those
types can have data in the non-zero array elements.
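
The rule, as a self-contained sketch (enum names are stand-ins for
Mesa's GLSL_TYPE_* values):

   enum base_type_sketch { BT_FLOAT, BT_DOUBLE, BT_INT, BT_UINT, BT_BOOL };

   /* Only float and double types can be matrices, so only they may
    * carry constant data beyond values[0]. */
   static unsigned
   num_constant_columns(enum base_type_sketch t, unsigned matrix_columns)
   {
      return (t == BT_FLOAT || t == BT_DOUBLE) ? matrix_columns : 1;
   }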

See also b870394.

Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Jason Ekstrand <jason.ekstrand@intel.com>
Cc: Iago Toral Quiroga <itoral@igalia.com>
2016-12-12 17:17:12 -08:00
Tim Rowley
75149088be swr: [rasterizer core/memory] StoreTile: AVX512 progress
Fixes to 128-bit formats.

Reviewed-by: Bruce Cherniak <bruce.cherniak@intel.com>
2016-12-12 17:52:39 -06:00
Matt Turner
ac6646129f nir: Move fsat outside of fmin/fmax if second arg is 0 to 1.
instructions in affected programs: 550 -> 544 (-1.09%)
helped: 6

cycles in affected programs: 6952 -> 6850 (-1.47%)
helped: 6

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-12 12:39:27 -08:00
Matt Turner
7bed52bb5f i965/fs: Reject copy propagation into SEL if not min/max.
We shouldn't ever see a SEL with conditional mod other than GE (for max)
or L (for min), but we might see one with predication and no conditional
mod.
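
A sketch of the guard, with the opcode and conditional-mod constants
treated as stand-ins for the backend's BRW_* names:

   #include <stdbool.h>

   enum { OP_SEL = 1 };
   enum { COND_NONE, COND_GE, COND_L };

   /* A SEL only behaves as a well-defined min/max when its conditional
    * mod is GE or L; a predicated SEL with no conditional mod must keep
    * its original sources, since propagated source modifiers could
    * change its result. */
   static bool
   can_propagate_into_sel(unsigned opcode, unsigned conditional_mod)
   {
      if (opcode != OP_SEL)
         return true;
      return conditional_mod == COND_GE || conditional_mod == COND_L;
   }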

total instructions in shared programs: 8241806 -> 8241902 (0.00%)
instructions in affected programs: 13284 -> 13380 (0.72%)
HURT: 62

total cycles in shared programs: 84165104 -> 84166244 (0.00%)
cycles in affected programs: 75364 -> 76504 (1.51%)
helped: 10
HURT: 34

Fixes generated code in at least Sanctum 2, Borderlands 2, Goat
Simulator, XCOM: Enemy Unknown, and Shogun 2.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92234
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-12 12:38:55 -08:00
Matt Turner
091a8a04ad i965/fs: Add unit tests for copy propagation pass.
Pretty basic, but it's a start.

Acked-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-12 12:38:50 -08:00
Matt Turner
6014da50ec i965/fs: Rename opt_copy_propagate -> opt_copy_propagation.
Matches the vec4 backend, cmod propagation, and saturate propagation.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2016-12-12 12:38:43 -08:00
Nicolai Hähnle
ec0a0a60cc radeonsi: shrink the GSVS ring to account for the reduced item sizes
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:05:17 +01:00
Nicolai Hähnle
6fdef7d265 radeonsi: shrink each vertex stream to the actually required size
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:05:13 +01:00
Nicolai Hähnle
2f2e941e2d radeonsi: use a single descriptor for the GSVS ring
We can hardcode all of the fields for swizzling in the geometry shader.

The advantage is that we use fewer descriptor slots and we no longer have to
update any of the (ring) descriptors when the geometry shader changes.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:05:05 +01:00
Nicolai Hähnle
18616e7551 radeonsi: pack GS output components for each vertex stream contiguously
Note that the memory layout of one vertex stream inside one "item" (= memory
written by one GS wave) on the GSVS ring is:

  t0v0c0 ... t15v0c0 t0v1c0 ... t15v1c0 ... t0vLc0 ... t15vLc0
  t0v0c1 ... t15v0c1 t0v1c1 ... t15v1c1 ... t0vLc1 ... t15vLc1
                        ...
  t0v0cL ... t15v0cL t0v1cL ... t15v1cL ... t0vLcL ... t15vLcL
  t16v0c0 ... t31v0c0 t16v1c0 ... t31v1c0 ... t16vLc0 ... t31vLc0
  t16v0c1 ... t31v0c1 t16v1c1 ... t31v1c1 ... t16vLc1 ... t31vLc1
                        ...
  t16v0cL ... t31v0cL t16v1cL ... t31v1cL ... t16vLcL ... t31vLcL

                        ...

  t48v0c0 ... t63v0c0 t48v1c0 ... t63v1c0 ... t48vLc0 ... t63vLc0
  t48v0c1 ... t63v0c1 t48v1c1 ... t63v1c1 ... t48vLc1 ... t63vLc1
                        ...
  t48v0cL ... t63v0cL t48v1cL ... t63v1cL ... t48vLcL ... t63vLcL

where tNN indicates the thread number, vNN the vertex number (in the order of
EMIT_VERTEX), and cNN the output component (vL and cL are the last vertex and
component, respectively).

The vertex streams are laid out sequentially.

The swizzling by 16 threads is hard-coded in the way the VGT generates the
offset passed into the GS copy shader, and the jump every 16 threads is
calculated from VGT_GSVS_RING_OFFSET_n and VGT_GSVS_RING_ITEMSIZE in a way
that makes it difficult to deviate from this layout (at least that's what
I've experimentally confirmed on VI after first trying to go the simpler
route of just interleaving the vertex streams).

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:05:00 +01:00
Nicolai Hähnle
edf034ac14 radeonsi: do not write non-existent components through the GSVS ring
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:58 +01:00
Nicolai Hähnle
af976f12a5 radeonsi: only write values belonging to the stream when emitting GS vertex
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:54 +01:00
Nicolai Hähnle
bdf1bf1cb5 radeonsi: generate an explicit switch instruction over vertex streams
SimplifyCFG generates a switch instruction anyway when all four streams
are present, but is simultaneously not smart enough to eliminate some
redundant jumps that it generates.
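
A sketch of emitting the switch with the LLVM C API that radeonsi
builds its IR with (the helper's shape is hypothetical):

   #include <llvm-c/Core.h>

   /* One case block per vertex stream, each branching to end_bb once
    * its exports/streamout stores are done. */
   static void
   emit_stream_switch(LLVMBuilderRef builder, LLVMValueRef stream_id,
                      LLVMBasicBlockRef end_bb, LLVMBasicBlockRef bb[4])
   {
      LLVMValueRef sw = LLVMBuildSwitch(builder, stream_id, end_bb, 4);

      for (unsigned stream = 0; stream < 4; stream++)
         LLVMAddCase(sw, LLVMConstInt(LLVMInt32Type(), stream, 0),
                     bb[stream]);
   }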

The generated assembly is still a bit silly, probably because the
control flow annotation doesn't know how to handle a switch with uniform
condition.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:49 +01:00
Nicolai Hähnle
bae929f96e radeonsi: fetch only outputs of current vertex stream from the GSVS ring
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:46 +01:00
Nicolai Hähnle
dfb69cac33 radeonsi: only export from GS copy shader for vertex stream 0
When running the copy shader for vertex streams != 0, the SX does not need
any data from us (there is no rasterization for the higher vertex streams,
only streamout).

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:43 +01:00
Nicolai Hähnle
21f2bb22a3 radeonsi: do not export VS outputs from vertex streams != 0
This only matters for GS copy shaders.  When an output is meant for a
vertex stream != 0, we don't have to make it available to the pixel
shader.

There is a minor inefficiency here because the GLSL varying packing pass
does not group varyings of the same vertex stream together, but it
shouldn't be important in practice.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:36 +01:00
Nicolai Hähnle
fc0e009aa7 radeonsi: pull iteration over vertex streams into GS copy shader logic
The iteration is not needed for normal vertex shaders.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:33 +01:00
Nicolai Hähnle
180ae18ec5 radeonsi: group streamout writes by vertex stream
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:30 +01:00
Nicolai Hähnle
d89592836a radeonsi: load the streamout buf descriptors closer to their use
LLVM can still decide to hoist the loads since they're marked invariant.

Reviewed-by: Marek Olšák <marek.olsak@amd.com>
2016-12-12 09:04:27 +01:00