This allows us to first generate atomic operations for shared
variables using these opcodes, and then later we can lower those to
the shared atomics intrinsics with nir_lower_io.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Previously we were receiving shared variable accesses via a lowered
intrinsic function from glsl. This change allows us to send in
variables instead, for example when converting from SPIR-V.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Phi handling is somewhat intrinsically tied to the CFG, and moving it
here makes it a bit easier to handle. In particular, we can now do SSA
repair after we've done the phi-node second pass. This fixes 6 CTS tests.
This is a port of Matt's GLSL IR lowering pass to NIR. It's required
because we translate SPIR-V directly to NIR, bypassing GLSL IR.
I haven't introduced a lower_ldexp flag, as I believe all current NIR
consumers would set the flag. i965 wants this, vc4 doesn't implement
this feature, and st_glsl_to_tgsi currently lowers ldexp
unconditionally anyway.
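For reference, ldexp(x, n) is just x * 2**n. A minimal Python sketch
of that identity, building 2**n directly from its IEEE-754 bit pattern
(illustrative only; the real lowering operates on NIR and also has to
cope with overflow and denormal inputs, which this ignores):

    import math
    import struct

    def lower_ldexp(x, n):
        # Build 2**n by writing the biased exponent (n + 127) into the
        # exponent field of a single-precision float.
        pow2_bits = (n + 127) << 23
        pow2 = struct.unpack('f', struct.pack('I', pow2_bits))[0]
        return x * pow2

    assert lower_ldexp(1.5, 4) == math.ldexp(1.5, 4)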
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
struct.pack('i', val) interprets `val` as a signed integer, and dies
if `val` > INT_MAX. For larger constants, we need to use 'I' which
interprets it as an unsigned value.
This patch makes us use 'I' for all values >= 0, and 'i' for negative
values. This should work in all cases.
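To illustrate the difference (the exact error text varies between
Python versions, but the 'i' case always fails for values above
INT_MAX):

    import struct

    struct.pack('i', -1)           # OK: fits in a signed 32-bit int
    struct.pack('I', 0x80000000)   # OK: non-negative, unsigned format

    try:
        struct.pack('i', 0x80000000)
    except struct.error:
        pass                       # 'i' rejects values above INT_MAX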
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
This is untested and probably broken.
We already passed the atan2 CTS tests before implementing this opcode.
Presumably, glslang or something was giving us a plain Atan opcode
instead of Atan2. I don't know why.
SPIR-V only has one ImageQuerySize opcode that has to work for both
textures and storage images. Therefore, we have to special-case that one a
bit and look at the type of the incoming image handle.
Image intrinsics always take a vec4 coordinate and always return a vec4.
This simplifies the intrinsics a bit but also means that they don't
actually match the incoming SPIR-V. In order to compensate for this, we
add swizzling movs for both source and destination to get the right number
of components.
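Conceptually, the fix-up amounts to padding the source coordinate out
to four components and narrowing the result back down. A hedged Python
sketch of that idea (not the actual builder code; the padding value is
an assumption here):

    def pad_coord_to_vec4(coord):
        # Pad the SPIR-V coordinate out to the vec4 the intrinsic expects.
        return tuple(coord) + (0,) * (4 - len(coord))

    def narrow_result(result_vec4, num_components):
        # Keep only the components the SPIR-V result type actually has.
        return result_vec4[:num_components]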
The Vulkan spec requires all allocations done for device creation to be
made with SCOPE_DEVICE. Since meta calls into other things that allocate
memory, the easiest way to do this is with an allocator.