mirror of
https://gitlab.freedesktop.org/mesa/mesa.git
synced 2026-05-02 18:48:28 +02:00
nir/format_convert: remove unorm bit size assert
Yes, we're losing precision when this assert would fail, and that is wrong. It's
also necessary to implement GL in a reasonable way on Asahi. Remove the assert
that was recently added and expand the comment explaining the mess.
Fixes debug build regression on asahi:
dEQP-GLES3.functional.vertex_arrays.single_attribute.normalize.int.components4_quads1
Fixes: 22f1b04a99 ("nir/format_convert: Assert that UNORM formats are <= 16 bits")
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Suggested-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Acked-by: Boris Brezillon <boris.brezillon@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/29820>
This commit is contained in:
parent 1ff86021a7
commit 535823682d
1 changed file with 7 additions and 1 deletion
@@ -202,8 +202,14 @@ _nir_format_norm_factor(nir_builder *b, const unsigned *bits,
       /* A 32-bit float only has 23 bits of mantissa. This isn't enough to
        * convert 24 or 32-bit UNORM/SNORM accurately. For that, we would need
        * fp64 or some sort of fixed-point math.
+       *
+       * Unfortunately, GL is silly and includes 32-bit normalized vertex
+       * formats even though you're guaranteed to lose precision. Those formats
+       * are broken by design, but we do need to support them with the
+       * bugginess, and the loss of precision here is acceptable for GL. This
+       * helper is used for the vertex format conversion on Asahi, so we can't
+       * assert(bits[i] <= 16). But if it's not, you get to pick up the pieces.
        */
-      assert(bits[i] <= 16);
       factor[i].f32 = (1ull << (bits[i] - is_signed)) - 1;
    }

    return nir_build_imm(b, num_components, 32, factor);