Given that a TexImage call that reaches us with this flag set will
immediately proceed to allocate the other levels, we can probably just go
ahead and allocate those levels now.
Reduces miptree copies in piglit by about 0.05%.
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
If the caller shows up with GL_TEXTURE_BASE_LEVEL != 0, it doesn't mean that
the texture will have that nonzero base level over the course of its
lifetime; it means that the caller is filling the texture from the bottom up
for some reason (one could imagine demand-loading detailed texture levels at
runtime, for example). If we allocate from just the current base level, it
means that when they come along with the next level up, we'll have to
allocate a new miptree and copy all of our bits out of the first miptree.
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Let's say you start allocating your 2D texture by uploading level 2 of a
tree as a 1x1 image. The driver doesn't know whether this means that level 0
is 4x4 or 4x1 or 1x4, so we would just allocate a single 1x1 and let it get
copied into the real location at texture validate time later.
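For reference, the standard GL minification rule is what makes all three
candidate base sizes collapse to the same 1x1 image at level 2; the helper
below is just an illustration of that rule, not driver code:

    #include <stdio.h>

    /* Each mip level halves the previous one, flooring and clamping to 1. */
    static unsigned minify(unsigned base, unsigned level)
    {
       unsigned d = base >> level;
       return d ? d : 1;
    }

    int main(void)
    {
       /* 4x4, 4x1 and 1x4 all produce 1x1 at level 2. */
       printf("%ux%u %ux%u %ux%u\n",
              minify(4, 2), minify(4, 2),
              minify(4, 2), minify(1, 2),
              minify(1, 2), minify(4, 2));
       return 0;
    }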
Since this is just a temporary allocation that *will* get copied, the extra
space spent by just taking the normal path (which will happen to produce a
4x1 level 0, 2x1 level 1, and 1x1 level 2) is the right way to go, since it
reduces complexity in the normal case.
No change in miptree copies over the course of a piglit run.
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
This has no effect currently, because intel_finalize_mipmap_tree() always
makes mt->first_level == tObj->BaseLevel.
The change I made before to handle it
(b1080cfbdb) got very close to working, but
after fixing some unrelated bugs in the series, it still left
tex-miplevel-selection producing errors when testing textureLod(). The
problem is that for explicit LODs, the sampler's LOD clamping is ignored,
and only the surface's MIP clamping is respected. So we need to use
surface mip clamping, which applies on top of the sampler's mip clamping,
and the sampler change gets backed out.
Now actually tested with a non-regressing series producing a non-zero
computed baselevel.
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
We know that the object's mt is equal to the firstimage's mt because it's
gone through intel_finalize_mipmap_tree(). Saves a lookup of firstimage
on pre-gen7.
v2: Merge in the warning fix that appeared later in the series (noted by
Chad)
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
The function r600_choose_tiling is new and needs a review.
The only change in functionality is that it enables 2D tiling for compressed
textures on SI. It was probably accidentally turned off.
v2: don't make scanout buffers linear
More work needs to be done for this to be entirely shared with r600g.
I'm just trying to share r600_texture.c now.
The reason I put the implementation in si_descriptors.c is that the emit
function was already there.
This doesn't fix any known issue (I haven't run piglit with this yet),
but the code was obviously completely wrong. It looks like it was
copy-pasted from CMP.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
If the argument to emit_bool_to_cond_code() is an ir_expression, we
loop over the operands, calling accept() on each of them, which
generates assembly code to compute that subexpression. We then emit
one or two final instructions that perform the top-level operation on
those operands.
If it's not an expression (say, a boolean-valued variable), we simply
call accept() on the whole value.
In commit 80ecb8f1 (i965/fs: Avoid generating extra AND instructions on
bool logic ops), Eric made logic operations jump out of the expression
path to the non-expression path.
Unfortunately, this meant that we would first accept() the two operands,
skip generating any code that used them, then accept() the whole
expression, generating code for the operands a second time.
Dead code elimination would always remove the first set of redundant
operand assembly, since nothing actually used them. But we shouldn't
generate it in the first place.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
Ironlake's counters are always enabled; userspace can simply send a
MI_REPORT_PERF_COUNT packet to take a snapshot of them. This makes it
easy to implement.
The counters are documented in the source code for the intel-gpu-tools
intel_perf_counters utility.
v2: Adjust for core data structure changes. Add a table mapping buffer
object offsets to exposed counters (which changes each generation).
Finally, add report ID assertions to sanity check the BO layout
(thanks to Carl Worth).
v3: Update for core BeginPerfMonitor hook changes (requested by Brian).
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This provides an interface for applications (and OpenGL-based tools) to
access GPU performance counters. Since the exact performance counters
available vary between vendors and hardware generations, the extension
provides an API the application can use to get the names, types, and
minimum/maximum values of all available counters. Counters are also
organized into groups.
Applications create "performance monitor" objects, select the counters
they want to track, and Begin/End monitoring, much like OpenGL's query
API. Multiple monitors can be in flight simultaneously.
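A rough usage sketch of the client-side API (assumes a context exposing
GL_AMD_performance_monitor and that the extension's entry points have been
loaded; error checking and result parsing are omitted):

    GLuint group, counter, monitor;
    GLint num_groups, num_counters, max_active;

    /* Enumerate one group and one counter within it. */
    glGetPerfMonitorGroupsAMD(&num_groups, 1, &group);
    glGetPerfMonitorCountersAMD(group, &num_counters, &max_active,
                                1, &counter);

    /* Create a monitor, enable the counter, and measure some rendering. */
    glGenPerfMonitorsAMD(1, &monitor);
    glSelectPerfMonitorCountersAMD(monitor, GL_TRUE, group, 1, &counter);
    glBeginPerfMonitorAMD(monitor);
    /* ... draw calls to be measured ... */
    glEndPerfMonitorAMD(monitor);

    /* Poll for results, much like a query object. */
    GLuint available = 0;
    glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AVAILABLE_AMD,
                                   sizeof(available), &available, NULL);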
v2: Pass ctx to all driver hooks (suggested by Christoph), and attempt
to fix overallocation of bitsets (caught by Christoph). Incomplete.
v3: Significantly rework core data structures. Store counters in groups
rather than in a global list. Use their array index in the group's
counter list as the ID rather than trying to store a globally unique
counter ID. Use bitsets for active counters within a group, and
also track which groups are active so that's easy to query.
v4: Remove _mesa_ prefix on static functions; detect out of memory
conditions in new_performance_monitor(); make BeginPerfMonitor hook
return a boolean rather than setting m->Active or raising an error.
Switch to GLuint/unsigned for NumGroups, NumCounters, and
MaxActiveCounters (which also means switching a bunch of temporary
variable types). All suggested by Brian Paul. Also, remove
commented out code at the bottom of the block. Finally, fix the
dispatch sanity test (noticed by Ian Romanick).
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Brian Paul <brianp@vmware.com> [v3]
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
This is better than overriding the extension enable based on the
language version; it's robust against shaders that do:
#version 140
#extension GL_ARB_uniform_buffer_object : disable
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Explicit attribute locations are supported with GLSL 3.30, GLSL ES 3.00,
or "#extension GL_ARB_explicit_attrib_location: enable". Using a helper
function makes it easy to check for this.
This enables support in GLSL 3.30, which was previously missing.
Previously, we overrode the extension enable flag for ES 3.00. This is
not robust against a shader such as:
#version 330
#extension GL_ARB_explicit_attrib_location : disable
Disabling extensions should not remove core language functionality.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Hardware requires the magnitude of the largest component to not exceed
1; brw_cubemap_normalize ensures that this is the case.
Unfortunately, we would previously multiply the array index for cube
arrays by the normalization factor. The incorrect array index would then
cause the sampler to attempt to access either the wrong cube, or memory
outside the cube surface entirely, resulting in garbage rendering or, in
the worst case, hangs.
Alter the normalization pass to only multiply the .xyz components.
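A hypothetical C equivalent of what the corrected pass computes (not the
actual GLSL IR code): scale only the .xyz direction components so the
largest magnitude becomes 1.0, and leave the .w array index alone.

    #include <math.h>

    static void normalize_cube_array_coord(float coord[4])
    {
       float m = fmaxf(fabsf(coord[0]),
                       fmaxf(fabsf(coord[1]), fabsf(coord[2])));
       if (m == 0.0f)
          return;
       float scale = 1.0f / m;
       coord[0] *= scale;   /* .x */
       coord[1] *= scale;   /* .y */
       coord[2] *= scale;   /* .z */
       /* coord[3] is the cube array layer index; it must not be scaled. */
    }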
Fixes broken rendering in the arb_texture_cube_map_array-cubemap piglit,
which was recently adjusted to provoke this behavior.
V2: Fix indent.
Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Cc: "9.2" mesa-stable@lists.freedesktop.org
Reviewed-by: Eric Anholt <eric@anholt.net>
Compress empty triangles (don't emit more than one in a row) and never
emit empty triangles if we have already generated a triangle covering a
non-null area. We can't skip all null triangles, because c_primitives
expects the ones that were generated from vertices exactly at the clipping
plane to be emitted.
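A rough sketch of the emission rule described above (all names here are
hypothetical, not the actual draw-module code):

    #include <stdbool.h>

    struct emit_state {
       bool last_was_null;    /* the last emitted triangle had zero area */
       bool emitted_nonnull;  /* a non-null triangle was already emitted */
    };

    /* Decide whether a triangle with the given area should be emitted. */
    static bool should_emit(struct emit_state *s, float area)
    {
       bool is_null = area == 0.0f;
       if (is_null && (s->last_was_null || s->emitted_nonnull))
          return false;   /* compress runs of empty triangles */
       s->last_was_null = is_null;
       s->emitted_nonnull |= !is_null;
       return true;
    }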
Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
We need to count the clipper primitives before the rasterizer discards
the ones it considers to be null.
Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
We need to subdivide triangles if either of the dimensions is larger than
the max edge length, not only when both of them are larger.
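In other words (illustrative only; the helper name is hypothetical):

    #include <stdbool.h>

    /* Subdivide when either dimension exceeds the limit,
     * not only when both do. */
    static bool needs_subdivision(float dx, float dy, float max_edge_len)
    {
       return dx > max_edge_len || dy > max_edge_len;
    }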
Signed-off-by: Zack Rusin <zackr@vmware.com>
Reviewed-by: José Fonseca <jfonseca@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The fix is at the end (TGSI_TEXTURE_SHADOWCUBE handling), but I also
restructured the code to make it more readable.
Fixes spec/!OpenGL 3.0/sampler-cube-shadow.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
This fixes compressedteximage piglit tests.
+10 piglits
Evergreen and Cayman have the same issue. R600 and R700 don't.
Cc: "9.2" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
This fixes some piglits, e.g.:
spec/!OpenGL 3.0/required-renderbuffer-attachment-formats.
This can be ported to r600g.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Only create one screen for each winsys instance.
This helps with buffer sharing and interop handling.
v2: rebased and some minor cleanup
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>