radv: only enable shaderInt16 on GFX9+ and LLVM7+

On GFX8, 16-bit integer throughput is similar to that of 32-bit
integers, and AMDVLK does not expose 16-bit integers on pre-Vega
hardware either. On GFX9+, only LLVM 7+ has support.

This fixes a bunch of CTS crashes on GFX9/LLVM 6.

Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Samuel Pitoiset 2018-09-20 22:17:03 +02:00
parent 945e9cdb2b
commit 674fcfaecc

@@ -763,7 +763,7 @@ void radv_GetPhysicalDeviceFeatures(
 	.shaderCullDistance = true,
 	.shaderFloat64 = true,
 	.shaderInt64 = true,
-	.shaderInt16 = true,
+	.shaderInt16 = pdevice->rad_info.chip_class >= GFX9 && HAVE_LLVM >= 0x700,
 	.sparseBinding = true,
 	.variableMultisampleRate = true,
 	.inheritedQueries = true,
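
For reference, a minimal sketch (not part of the patch) of how an application
would query this feature bit at runtime before relying on 16-bit integers in
shaders; the device_supports_int16 helper name is illustrative:

#include <vulkan/vulkan.h>

/* Illustrative helper: returns the shaderInt16 feature bit reported by the
 * driver. With this change, radv reports VK_TRUE only on GFX9+ with LLVM 7+. */
static VkBool32 device_supports_int16(VkPhysicalDevice physical_device)
{
	VkPhysicalDeviceFeatures features;
	vkGetPhysicalDeviceFeatures(physical_device, &features);
	return features.shaderInt16;
}

Note that querying is not enough on its own: the application must also enable
the feature at device creation time, e.g. by setting shaderInt16 = VK_TRUE in
the VkPhysicalDeviceFeatures structure passed through
VkDeviceCreateInfo::pEnabledFeatures.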