We're required to expose a host-visible, coherent memory type. On big
core GPUs that share LLC, we can expose one such memory type that's
also cached. However, on non-LLC GPUs we can't have a single type that
is both cached and coherent. Thus, we expose both the required coherent
type and the cached but non-coherent combination.
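A minimal sketch of how the two types might be advertised on a non-LLC
platform (single heap assumed; the table below is illustrative, not the
actual anv code):

    /* Type 0: coherent but uncached; type 1: cached but incoherent. */
    VkMemoryType types[2] = {
       { .propertyFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
                          VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
         .heapIndex = 0 },
       { .propertyFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
                          VK_MEMORY_PROPERTY_HOST_CACHED_BIT,
         .heapIndex = 0 },
    };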
Regular objects are created I915_CACHING_CACHED on LLC platforms and
I915_CACHING_NONE on non-LLC platforms. However, userptr objects are
always created as I915_CACHING_CACHED, which on non-LLC means
snooped. That can be useful but comes with a bit of overhead. Since
we're explicitly clflushing and don't want the overhead, we need to
turn it off.
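The knob for that is the i915 set-caching ioctl; a hedged sketch, with
error handling and the surrounding bo code omitted:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Force a userptr BO back to unsnooped (illustrative only). */
    static int
    gem_set_caching_none(int fd, uint32_t handle)
    {
       struct drm_i915_gem_caching arg = {
          .handle = handle,
          .caching = I915_CACHING_NONE,
       };
       return ioctl(fd, DRM_IOCTL_I915_GEM_SET_CACHING, &arg);
    }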
For MSAA, we store full resolution tile buffer contents, which have their
own tiling format. Since they're full resolution buffers, we have to
align their size to full tiles.
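The alignment itself is just rounding both dimensions up to tile
boundaries before computing the allocation size; a sketch with
placeholder tile dimensions (not the real MSAA tile size):

    #define MSAA_TILE_W 32   /* placeholder dimensions */
    #define MSAA_TILE_H 32
    #define ALIGN_UP(v, a) (((v) + (a) - 1) / (a) * (a))

    uint32_t tiled_w = ALIGN_UP(width, MSAA_TILE_W);
    uint32_t tiled_h = ALIGN_UP(height, MSAA_TILE_H);
    uint32_t size = tiled_w * tiled_h * bytes_per_sample;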
We were checking that the blit started at 0 and was 1:1, but not that it
went to the full width of the surface, or that the width was aligned to a
tile. We then told it to blit to the full width/height of the surface,
causing contents to be stomped in a bunch of MSAA tests that happen to
include half-screen-width blits to 0,0.
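Roughly, the fast path needs a check along these lines (names
hypothetical) before blitting the full surface:

    /* Only take the full-surface blit path when the blit really covers
     * the whole surface and the width is tile-aligned. */
    if (dst_x == 0 && dst_y == 0 &&
        width == surf->width && height == surf->height &&
        surf->width % TILE_W == 0) {
       /* blitting full width/height is safe here */
    }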
I've played with a few different approaches to tweak instruction
priority according to how much they increase/decrease register pressure,
etc. But nothing seems to change the fact that, compared to the
original (pre-multiple-block-support) scheduler, in some edge cases we are
generating shaders w/ 5-6x higher register usage.
The problem is that the priority queue approach completely loses the
dependency between instructions, and ends up scheduling all paths at the
same time.
The original reason for switching was that the recursive approach relied on
starting from the shader outputs array. But we can achieve more or less
the same thing by starting from the depth-sorted list.
shader-db results:
total instructions in shared programs: 113350 -> 105183 (-7.21%)
total dwords in shared programs: 219328 -> 211168 (-3.72%)
total full registers used in shared programs: 7911 -> 7383 (-6.67%)
total half registers used in shared programs: 109 -> 109 (0.00%)
total const registers used in shared programs: 21294 -> 21294 (0.00%)
            half    full   const   instr   dwords
helped         0     322       0     711      215
hurt           0     163       0      38        4
The shaders hurt tend to gain a register or two. While there are also a
lot of helped shaders that only lose a register or two, the more
complex ones tend to lose significantly more registers. In some
more extreme cases, like glsl-fs-convolution-1.shader_test, it is more
like 7 vs 34 registers!
Signed-off-by: Rob Clark <robclark@freedesktop.org>
It causes confusion in sched if we need to split_addr(), since we
wouldn't easily know which block the new addr instr will be scheduled
in. So just side-step the whole situation.
Signed-off-by: Rob Clark <robclark@freedesktop.org>
We'll need to add something similar for ir3_instruction, but following
the pattern of using 'id' seems confusing. Let's just go w/ generic
'data' as the name.
Signed-off-by: Rob Clark <robclark@freedesktop.org>
Undefining NDEBUG is relevant for release builds, as they are the ones
that set it.
[Emil Velikov: split from previous patch]
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
This follows the src/util/u_atomic_test.c model of undefining NDEBUG
unconditionally throughout the XvMC tests, to force asserts regardless
of debug mode.
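The pattern in question is simply undefining NDEBUG before pulling in
assert.h, so the asserts stay active even when the build defines it:

    /* Keep asserts enabled regardless of build type. */
    #undef NDEBUG
    #include <assert.h>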
The comment on u_atomic_test.c is also fixed (it read 'debug' where it
should have said 'release').
v2: s/debug/release/ in relevant comments
Signed-off-by: Giuseppe Bilotta <giuseppe.bilotta@gmail.com>
[Emil Velikov: keep the src/util/ hunk as separate patch]
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Add missing `const` specifier for pointer pointing to a const struct.
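For illustration (hypothetical names), the shape of the fix:

    /* The pointee is never modified, so declare it pointer-to-const. */
    const struct foo *f = lookup_foo(name);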
Signed-off-by: Giuseppe Bilotta <giuseppe.bilotta@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Giuseppe Bilotta <giuseppe.bilotta@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Emil Velikov <emil.l.velikov@gmail.com>
Should have been part of commit f53f9eb8d4 "glapi: add GetPointervKHR
to the ES dispatch".
v2: comment out the ES1.1 symbol and use the same description (pattern)
as elsewhere (Matt)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=93235
Fixes: f53f9eb8d4 "glapi: add GetPointervKHR to the ES dispatch".
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Tested-by: Vinson Lee <vlee@freedesktop.org> (v1)
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
Since we're using nir_lower_outputs_to_temporaries to shadow all our
outputs, it's impossible to actually get an indirect store. The code we
had to "handle" this was pretty bogus as it created a register with a
reladdr and then stuffed it in a fixed varying slot without so much as a
MOV. Not only does this not do the MOV, it also puts the indirect on the
wrong side of the transaction. Let's just delete the broken dead code.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
It's not really buying us anything at this point. It's just a way of
remapping one offset namespace onto another. We can just use the location
namespace the whole way through.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
The original change to put zeroes directly into instructions created
conditional mov's with the zero immediate. However, that can't be
emitted, so make sure to replace the zero with r63.
Fixes: 52a800a68 (nv50/ir: allow immediate 0 to be loaded anywhere)
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
A 128-bit load whose last component gets DCE'd causes a 96-bit load to
be generated, which no GPU can actually emit. Avoid generating such
instructions by scaling back to 64-bit on the first load when
splitting.
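A hedged sketch of the rule (sizes in bytes; names illustrative, not
the actual nv50/ir code):

    /* A 12-byte (96-bit) piece can't be encoded, so when splitting,
     * scale the first load back to 8 bytes (64 bits) and let the
     * following load pick up the rest. */
    if (first_size == 12)
       first_size = 8;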
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Cc: "11.0 11.1" <mesa-stable@lists.freedesktop.org>
This was just plain broken. It always used the value from v0 (for vp_index)
but would pass the value from the provoking vertex to later stages - but only
if there was a corresponding fs input, otherwise the layer/vp index would get
lost completely (as it would try to interpolate the (unsigned) values as
floats).
So, make it obey provoking vertex rules (drivers relying on draw will need to
do the same). And make sure that the default interpolation mode (when no
corresponding fs input is found) for them is constant.
Also, change the code a bit so constant inputs aren't interpolated then
copied over later.
Fixes the new piglit test gl-layer-render-clipped.
v2: more consistent whitespace fixes for function defs, and more tab killing
(overall still not quite right however).
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Same as for llvmpipe, albeit softpipe only really handles multiple layers,
not multiple viewports/scissors.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
d3d10 actually requires using the provoking (first) vertex. GL is happy with
any vertex (as long as we say it's undefined in the corresponding queries).
Up to now we actually used vertex 0 for viewport index, and vertex 1 for
layer (for tris), which really didn't make sense (probably a typo). Also,
since we reorder vertices of clockwise triangles, that actually meant we used
a different vertex depending on whether the triangle was cw or ccw (still ok by gl).
However, it should be consistent with what draw (clip) does, and using
provoking vertex seems like the sensible choice (draw clip will be fixed
next as it is totally broken there).
While here, also always use the correct viewport even when not needed
in setup (we pass it down to the jit fragment shader since it might be
needed there for getting correct near/far depth values).
No piglit changes.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
We can't dump it in the real driver, since the kernel doesn't give us a
handle to it (except after a GPU hang, using a root ioctl). In the
simulator we can.
Pre-Skylake, RENDER_SURFACE_STATE.SurfaceVerticalAlignment is in units
of surface samples. A surface sample is equivalent to a pixel in all
surfaces except interleaved multisample surfaces.
In Skylake, it is in units of surface elements. A surface element is
equivalent to a surface sample except for compressed formats, in which
case the element is a compression block.
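As a hedged illustration of the difference: for a block-compressed
format such as BC1 (4x4 blocks), converting a Skylake-style alignment
in elements back to pixels looks roughly like this (names illustrative):

    /* A valign of 4 elements on a 4x4-block format is 16 pixels. */
    uint32_t valign_px = valign_el * format_block_height;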
In anv_image_create(), stop asserting that VkImageCreateInfo::extent
does not exceed the hardware limits for the given SURFTYPE. The
assertions were incorrect because they did not take into account the
hardware gen. Anyway, these types of assertions belong in isl, not
anvil.
Remove the surface layout calculations in anv_image_make_surface(). Let
isl_surf_init() do the heavy lifting.
Fixes 8 Crucible tests and regresses none. (hw=Broadwell and
crucible@33d91ec).
This is a big code push. The patch is about 3000 lines.
Function isl_surf_init() calculates the physical layout of a surface.
The implementation is "complete" (but untested) for all 1D, 2D, 3D, and
cube surfaces for gen4 through gen9, except:
* gen9 1D surfaces
* gen9 Ys multisampled surfaces
* auxiliary surfaces (such as hiz, mcs, ccs)
Rename legacy Y tiling from ISL_TILING_Y to ISL_TILING_Y0 in order to
clearly distinguish it from Yf and Ys. Using ISL_TILING_Y to denote
legacy Y tiling would lead to confusion with i965, because i965 uses
I915_TILE_Y to denote *any* Y tiling.
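So the naming ends up along these lines (a sketch; the real enum in
isl.h may have more members and differ in detail):

    enum isl_tiling {
       ISL_TILING_LINEAR,
       ISL_TILING_X,
       ISL_TILING_Y0, /* legacy Y, distinct from Yf/Ys below */
       ISL_TILING_Yf,
       ISL_TILING_Ys,
    };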