mesa: Use GLdouble for depthMax in final unpack conversions.
The final step of _mesa_unpack_depth_span is to take the temporary
GLfloat depth values and convert them to the desired format. When
converting to GL_UNSIGNED_INTEGER with depthMax > 0xffffff, we use
double-precision math to avoid overflow and precision problems.
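For reference, a minimal standalone sketch (not Mesa code; depthMax here is just an illustration) of why single precision stops being adequate past 0xffffff: a GLfloat carries a 24-bit significand, so 0xffffffff is not even representable and rounds up to 2^32 when converted.

#include <stdio.h>

int main(void)
{
   unsigned depthMax = 0xffffffffu;

   /* 0xffffffff needs 32 significand bits; a float has 24, so the
    * conversion rounds up to 2^32 -- already past the valid range. */
   printf("(float)depthMax  = %.1f\n", (double) (float) depthMax);
   printf("(double)depthMax = %.1f\n", (double) depthMax);

   /* Up through depthMax = 0xffffff every value fits in 24 bits, so
    * the single-precision path used for smaller formats stays exact. */
   return 0;
}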
Or at least that's the idea. Unfortunately
GLdouble z = depthValues[i] * (GLfloat) depthMax;
actually causes single-precision multiplication, since both operands are
GLfloats. Casting depthMax to GLdouble causes the scaling to be done
with double-precision math.
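A minimal sketch of the pitfall (hypothetical values, not Mesa code), assuming FLT_EVAL_METHOD == 0 so float expressions really are evaluated in single precision: the destination type of the assignment does not widen the multiplication; only a cast on an operand does.

#include <stdio.h>

int main(void)
{
   float    depth    = 0.81114506f;   /* hypothetical normalized depth */
   unsigned depthMax = 0xffffffffu;

   /* Both operands are float, so the multiply happens in single
    * precision; widening to double afterwards cannot restore bits. */
   double z_single = depth * (float)  depthMax;
   double z_double = depth * (double) depthMax;

   /* The integer results differ slightly, mirroring the off-by-a-few
    * readback values seen in the regression. */
   printf("single: 0x%08x\n", (unsigned) z_single);
   printf("double: 0x%08x\n", (unsigned) z_double);
   return 0;
}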
Fixes a regression in oglconform's depth-stencil basic.read.ds test
since c60ac7b179, where the expected and
actual values differed slightly. For example, 0xcfa7a6 vs. 0xcfa7a4.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=49772
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
parent 36fe8a5b7f
commit 57295009e8
1 changed file with 1 addition and 1 deletion
@@ -4900,7 +4900,7 @@ _mesa_unpack_depth_span( struct gl_context *ctx, GLuint n,
       else {
          /* need to use double precision to prevent overflow problems */
          for (i = 0; i < n; i++) {
-            GLdouble z = depthValues[i] * (GLfloat) depthMax;
+            GLdouble z = depthValues[i] * (GLdouble) depthMax;
             if (z >= (GLdouble) 0xffffffff)
                zValues[i] = 0xffffffff;
             else