The R600_DUMP_SHADERS environment variable now allows choosing the dump
method:
0 (default) - no dump
1 - full dump (old dump)
2 - disassemble
3 - both
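A rough sketch of how the selection could be decoded (the helper name
and flag values below are illustrative, not the exact driver code):

    #include <stdlib.h>

    enum dump_flags {
        DUMP_NONE        = 0,
        DUMP_FULL        = 1 << 0,   /* the old full dump */
        DUMP_DISASSEMBLE = 1 << 1,   /* disassembly */
    };

    static unsigned get_shader_dump_flags(void)
    {
        const char *s = getenv("R600_DUMP_SHADERS");

        switch (s ? atoi(s) : 0) {
        case 1:  return DUMP_FULL;
        case 2:  return DUMP_DISASSEMBLE;
        case 3:  return DUMP_FULL | DUMP_DISASSEMBLE;
        default: return DUMP_NONE;
        }
    }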
v2: fix output for burst_count > 1
v3: use more human-readable output for kcache data in CF_ALU_xxx clauses,
improve output for ALU_EXTENDED, other minor fixes
Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
v3: added some flags including condition codes for ALU,
fixed issue with CF reverse lookup (overlapping ranges of CF_ALU_xxx
and other CF instructions)
rebased on current master
Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
We are now seeing command streams that can go over the vram+gtt size;
to avoid failing, flush early any cs that goes over 70% of (gtt+vram)
usage. The 70% threshold is used to leave room for some fragmentation.
The idea is to compute a gross estimate of the memory requirement of
each draw call. After each draw call, memory is precisely accounted,
so the uncertainty only applies to the current draw call.
In practice this gives a very good estimate (+/- 10% of the target
memory limit).
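A minimal sketch of the early-flush check (names are illustrative; the
actual accounting lives in the driver):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical: flush early if the gross estimate for the current
     * draw call would push accounted memory over 70% of vram+gtt. */
    static bool need_early_flush(uint64_t vram_size, uint64_t gtt_size,
                                 uint64_t accounted, uint64_t draw_estimate)
    {
        uint64_t limit = (vram_size + gtt_size) * 70 / 100;
        return accounted + draw_estimate > limit;
    }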
v2: Remove leftovers from the testing version, remove useless NULL
checking. Improve commit message.
v3: Add comment to code on memory accounting precision
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Marek Olšák <maraeo@gmail.com>
This reverts commit 2906e2034c.
Fixes a regression in the glean depthStencil test.
Reverting this does not affect any tests in es3conform, so a more recent
patch must have also fixed the failure this one was intended to fix.
Reported-by: lu hua <huax.lu@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=59494
v2: Update to handle BufferSize being -1 and return a NULL sampler
view if the specified range would cause out of bounds access.
Reviewed-by: Brian Paul <brianp@vmware.com>
Acked-by: Ian Romanick <ian.d.romanick@intel.com>
v2: Record texObj.BufferSize as -1 in TexBuffer(non-Range) instead
of the buffer's current size so we know we always have to use the
full size of the buffer object (i.e. even if it changes without the
user calling TexBuffer again) for the texture.
Clarify invalid offset alignment error message.
v3: Use extra GL_CORE-only section in get_hash_params.py for
TEXTURE_BUFFER_OFFSET_ALIGNMENT.
v4: Remove unnecessary check for profile in _mesa_TexBufferRange.
Add check for extension enable in get_tex_level_parameter_buffer.
v5: Fix position in gl_API.xml.
Add comment about meaning of BufferSize == -1.
v6: Add back checks for core profile and add a note about it.
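The BufferSize == -1 convention described above boils down to something
like this (hypothetical helper, not the exact Mesa code):

    #include <GL/gl.h>

    /* BufferSize == -1 means "use the whole buffer object", so the
     * effective size tracks the buffer even if it is resized later
     * without the user calling TexBuffer again. */
    static GLsizeiptr
    buffer_texture_size(GLsizeiptr buffer_size, GLintptr offset,
                        GLsizeiptr buffer_object_size)
    {
        if (buffer_size == -1)
            return buffer_object_size - offset;
        return buffer_size;
    }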
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Check that the return value from xcb_dri2_swap_buffers_reply is
non-NULL before accessing the struct members.
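The fix amounts to the usual xcb reply pattern (sketch; field names
follow xcb's DRI2 protocol description, and the counter merge is just
illustrative):

    #include <stdlib.h>
    #include <stdint.h>
    #include <xcb/dri2.h>

    static int64_t
    get_swap_count(xcb_connection_t *c, xcb_dri2_swap_buffers_cookie_t cookie)
    {
        xcb_dri2_swap_buffers_reply_t *reply =
            xcb_dri2_swap_buffers_reply(c, cookie, NULL);
        int64_t ret = -1;

        if (reply) {   /* can be NULL, e.g. on X connection errors */
            ret = ((int64_t)reply->swap_hi << 32) | reply->swap_lo;
            free(reply);
        }
        return ret;
    }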
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Brian Paul <brianp@vmware.com>
S8_UNORM was inadvertently supported together with Z16_UNORM.
I tried to update the code to accommodate stencil-only -- it seemed a simple
thing to do -- but "fbo-stencil clear GL_STENCIL_INDEX8" still fails,
and it's not worth debugging.
Therefore although this change tries to update for S8_UNORM, it also
disables it completely.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
This special path hadn't been exercised by my earlier testing, and mask
values weren't being properly truncated to match the values.
This change fixes that.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The struct padding got broken by c789b981b2.
This caused a serious performance regression because part of the key was
uninitialized and hence the shader was always recompiled (at least on
release builds...).
While here also fix key size calculation when the number of samplers
and the number of sampler views are different.
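A sketch of the two fixes (structure and field names are illustrative,
not the actual driver code; MAX2 is Mesa's max macro):

    /* Zero the whole key, padding included, so hashing/compares see
     * deterministic bytes and identical state maps to the same key. */
    memset(&key, 0, sizeof(key));

    /* Size the variable-length tail by the larger of the two counts when
     * the number of samplers and sampler views differ. */
    unsigned n = MAX2(num_samplers, num_sampler_views);
    unsigned key_size = sizeof(key) + n * sizeof(key.samplers[0]);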
v2: add comment
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
We never really have multisampling with one sample per pixel.
See also http://bugs.freedesktop.org/show_bug.cgi?id=59873
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
The swrast fragment program interpreter has trouble computing the
right texture LOD because it doesn't have easy access to input
derivatives. This causes the GLSL-based meta generate mipmap code
to fetch texels from the wrong mipmap level.
One possible fix would be to set the GL_TEXTURE_MIN/MAX_LOD parameters
to limit sampling from the right level. But let's just use the
_mesa_generate_mipmap() fallback since it's a lot faster than using
the fragment shader interpreter.
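In driver terms the change is essentially pointing the hook at the
software path (sketch; the exact hook wiring in swrast may differ):

    /* Use the CPU mipmap generation fallback instead of the GLSL-based
     * meta path, which depends on accurate texture LOD selection. */
    functions->GenerateMipmap = _mesa_generate_mipmap;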
Fixes http://bugs.freedesktop.org/show_bug.cgi?id=54240
Note: This is a candidate for the 9.0 branch.
Reviewed-by: José Fonseca <jfonseca@vmware.com>
The gallium docs for pipe_screen::is_format_supported() say that both
samples==0 and samples==1 mean that multisampling is not supported.
Return GL_MAX_SAMPLES==0 instead of 1 for consistency with other drivers.
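A sketch of the reporting change (the field name follows Mesa's
ctx->Const convention; treat it as illustrative):

    /* 1 sample per pixel is not real multisampling, so report 0 just
     * like drivers that do not support multisampling at all. */
    if (max_samples < 2)
        max_samples = 0;
    ctx->Const.MaxSamples = max_samples;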
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Marek Olšák <maraeo@gmail.com>
Simply by adjusting the vector element width after/before
reading/writing the depth-stencil values.
Ran several GL_DEPTH_COMPONENT16 piglit tests without regressions.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
The maximum number of URB entries comes from the 3DSTATE_URB_VS and
3DSTATE_URB_GS state packet documentation; the thread count information
comes from the 3DSTATE_VS and 3DSTATE_PS state packet documentation.
NOTE: This is a candidate for the 9.0 branch.
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
The packet length may change at some point in the future. Specifying it
explicitly (rather than hardcoding it in the command #define) allows us
to change it much more easily in the future.
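For example, the length can be passed at the emit site rather than
folded into the opcode #define (the helper, its types, and the length
bias shown here are illustrative, in the usual GPU-command style):

    /* Hypothetical emit helper: the hardware length field is typically
     * "DWord count minus 2", computed from the dword count passed in. */
    static void emit_packet_header(struct batch *batch, uint32_t opcode,
                                   unsigned dwords)
    {
        batch_emit_dword(batch, opcode << 16 | (dwords - 2));
    }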
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Eric Anholt <eric@anholt.net>
I did not list the *_get_program_binary extensions since they're not
useful to anyone with their current implementation (that supports 0
binary formats).
Before, we were keeping a CPU-only buffer to accumulate the batchbuffer in,
which was an improvement over mapping the batch through the GTT directly
(since any readback or other failure to stream through write combining
correctly would hurt). However, on LLC-sharing architectures we can do better
by mapping the batch directly, which reduces the cache footprint of the
application since we no longer have this extra copy of a batchbuffer around.
Improves performance of GLBenchmark 2.1 offscreen on IVB by 3.5% +/- 0.4%
(n=21). Improves Lightsmark performance by 1.1 +/- 0.1% (n=76). Improves
cairo-gl performance by 1.9% +/- 1.4% (n=57).
No statistically significant difference in GLB2.1 on SNB (n=37). Improves
cairo-gl performance by 2.1% +/- 0.1% (n=278).
Fixes "side effect in assertion" defects reported by Coverity.
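The defect class looks like this (generic example, not the actual code
in question):

    /* Bad: the side effect vanishes when assertions are compiled out
     * (NDEBUG builds), changing program behavior. */
    assert(count++ < max);

    /* Good: keep the side effect outside the assertion. */
    assert(count < max);
    count++;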
Note: This is a candidate for the 9.1 branch.
Signed-off-by: Vinson Lee <vlee@freedesktop.org>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Currently a gralloc internal structure is exposed to Mesa; use a query
function instead to maintain ABI compatibility.
Signed-off-by: Tapani Pälli <tapani.palli@intel.com>
Reviewed-by: Chad Versace <chad.versace@linux.intel.com>
Old kernels do not have DMA support, and the patch that was pushed was
missing some of the checks needed to avoid using DMA.
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Only drivers supporting DRI2 version >=4 support GLX_INTEL_swap_event.
So let's mark it as such, otherwise applications which use this extension
(i.e. everything based on Clutter, e.g. gnome-shell) break horribly on
drivers supporting DRI2 versions only up to 3.
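Conceptually the change just gates the extension on the DRI2 version
(sketch with illustrative names, not the exact loader code):

    /* Only advertise GLX_INTEL_swap_event when the DRI2 version is new
     * enough to actually deliver swap events. */
    if (dri2_version >= 4)
        enable_glx_extension(screen, "GLX_INTEL_swap_event");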
Note: This is a candidate for the 9.0 branch.
Reviewed-by: Brian Paul <brianp@vmware.com>
Get rid of special handling for reserved regs.
Use one intrinsic for all kinds of interpolation.
v2[Vincent Lejeune]: Rebased against current master
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Vadim Girlin <vadimgirlin@gmail.com>
r600_bytecode::ar_chan stores the register channel for the value that
will be loaded into the AR register.
At the moment, this field is only used by the LLVM backend. The default
backend always sets ar_chan = 0.
It shouldn't be needed and older kernels don't support
it.
v2: Replace with PS partial flush as before.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=59945
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Marek Olšák <maraeo@gmail.com>
We keep track of ring emission order in a stack; whenever we need to
flush, we empty the stack in FIFO order. There are a few helper
functions for bo mapping and other ring activities that make sure the
ring stack is properly flushed and submitted.
v2: fix the st flush path, and other flush paths, to properly flush all
rings if necessary
v3: - improve names of ring helpers
    - make sure that each time a cs is going to be written it ends up
      at the top of the stack, to avoid issues such as:
      STACK[0] = dma (with bo A,B)
      STACK[1] = gfx (with bo C,D)
      Now if code tries to emit a dma command relative to bo C or D,
      it will start writing the command stream into the cs, and once
      it reaches the point where it adds the relocation it will flush.
      At that point the cs will have commands without proper
      relocations in the relocation buffer, and the kernel will just
      refuse to run it.
v4: - Drop the stack idea as it turns out there is no way to use it
or benefit from it. Any time the driver starts commands on another
ring, it always needs to flush the previous ring. So make the code
simpler by not using a stack.
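With v4 the rule is simply: before emitting on one ring, flush the
other. A minimal sketch under that assumption (types and names are
illustrative, not the actual driver code):

    enum ring_id { RING_NONE, RING_GFX, RING_DMA };

    struct rctx {
        enum ring_id current_ring;
    };

    static void flush_ring(struct rctx *rctx, enum ring_id ring); /* hypothetical */

    /* Hypothetical helper: any time the driver starts emitting on a
     * different ring, flush the previously active ring first so its
     * commands and relocations reach the kernel in order. */
    static void switch_ring(struct rctx *rctx, enum ring_id ring)
    {
        if (rctx->current_ring != RING_NONE && rctx->current_ring != ring)
            flush_ring(rctx, rctx->current_ring);
        rctx->current_ring = ring;
    }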
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Add ring support; you can create a cs for each ring. The DMA ring is a
bit special regarding relocations, as you must emit as many relocations
as there are uses of the buffer.
v2: - Improved comment on relocation changes
    - Use a single thread to queue cs submission; this simplifies driver
      code while not impacting performance. The rationale for this is
      that you have to wait for all previous submissions to have
      completed, so there was never a case where we could have two
      different threads submitting a command stream at the same time.
      This code just consolidates submission into one single thread per
      winsys.
v3: - Do not use a semaphore for empty-queue signaling; use a cond var
      instead. This is because it's tricky to maintain a balanced
      number of calls to semaphore wait and semaphore signal (the
      number of cs in the stack would, for instance, make that number
      vary).
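The queued-submission pattern looks roughly like this (pthread sketch
with illustrative names; not the actual winsys code):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct cs_job { struct cs_job *next; };

    struct cs_queue {
        pthread_mutex_t mutex;
        pthread_cond_t  cs_queued;    /* signaled when a cs is added */
        pthread_cond_t  queue_empty;  /* signaled when all cs are submitted */
        struct cs_job  *jobs;         /* hypothetical pending-job list */
        bool            kill_thread;
    };

    static void submit_to_kernel(struct cs_job *job); /* hypothetical ioctl wrapper */

    /* Hypothetical submission thread: wait on a cond var until a cs is
     * queued, submit jobs in order, then wake anyone waiting for an
     * empty queue. */
    static void *submit_thread(void *arg)
    {
        struct cs_queue *q = arg;

        pthread_mutex_lock(&q->mutex);
        while (!q->kill_thread) {
            while (!q->jobs && !q->kill_thread)
                pthread_cond_wait(&q->cs_queued, &q->mutex);
            while (q->jobs) {
                struct cs_job *job = q->jobs;
                q->jobs = job->next;
                pthread_mutex_unlock(&q->mutex);
                submit_to_kernel(job);
                pthread_mutex_lock(&q->mutex);
            }
            pthread_cond_broadcast(&q->queue_empty);
        }
        pthread_mutex_unlock(&q->mutex);
        return NULL;
    }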
Signed-off-by: Jerome Glisse <jglisse@redhat.com>