The Cell stencil tests were completely ignoring the stencil value mask.
The original code paths are still used when the stencil value mask is
all 1s; when it is not, code is now emitted to apply the mask to both
the stencil value and the reference value before they are compared.
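A minimal sketch in plain C of what the masked comparison computes
(the change itself emits equivalent SPU code; these names are
illustrative, not from the driver):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hedged sketch, not the generated SPU code: the value mask is
     * applied to both the stored stencil value and the reference
     * value before the comparison (an EQUAL test as the example). */
    static bool
    stencil_test_equal(uint8_t stored, uint8_t ref, uint8_t value_mask)
    {
        return (stored & value_mask) == (ref & value_mask);
    }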
Two definite bugs in stenciling were fixed.
The first, reversed registers in the generated Select Bits (selb)
instruction, caused the stencil INCR and DECR operations to fail
dramatically, storing new values where old values were supposed to
be kept and vice versa.
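If I recall the SPU ISA correctly, selb computes, bit by bit,
rt = (ra & ~rc) | (rb & rc), so swapping ra and rb inverts the
selection. A scalar C model of the bug (the real instruction is
128-bit SIMD; variable names are illustrative):

    #include <stdint.h>

    /* selb rt, ra, rb, rc: take ra where the mask bit is 0,
     * rb where it is 1. */
    static uint32_t
    selb(uint32_t ra, uint32_t rb, uint32_t mask)
    {
        return (ra & ~mask) | (rb & mask);
    }

    /* Intended: keep old values where write_mask is 0, take the
     * INCR/DECR results where it is 1:
     *     out = selb(old_vals, incr_vals, write_mask);
     * The bug reversed the first two operands, swapping old and new:
     *     out = selb(incr_vals, old_vals, write_mask);
     */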
The second caused stencil tiles not to be read from, or written to,
main memory by the SPUs. A per-SPU flag, spu.read_depth, indicated
whether the SPU should read depth tiles, and was set only when depth
was enabled. A second flag, spu.read_stencil, was set when stenciling
was enabled, but was never referenced.
As stencil and depth share the same tiles on the Cell, and there is
no corresponding TAG_WRITE_TILE_STENCIL to complement
TAG_WRITE_TILE_COLOR and TAG_WRITE_TILE_Z, I fixed this by
eliminating the unused spu.read_stencil, renaming spu.read_depth to
spu.read_depth_stencil, and setting it if either stenciling or depth
is enabled.
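A sketch of the resulting logic, with names approximating (not
verbatim from) the driver code:

    #include <stdbool.h>

    struct spu_state {
        bool read_depth_stencil;   /* replaces read_depth and the
                                      unused read_stencil */
    };

    /* Depth and stencil share one tile, so a single flag decides
     * whether the SPU reads (and later writes back) that tile. */
    static void
    update_tile_read_flag(struct spu_state *spu,
                          bool depth_enabled, bool stencil_enabled)
    {
        spu->read_depth_stencil = depth_enabled || stencil_enabled;
    }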
I also added an optimization to the fragment ops generation code
that avoids calculating stencil values and the stencil writemask
when the stencil operations are all KEEP.
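Illustratively (the enum and struct names are Gallium's; the
surrounding code is approximated), the check amounts to:

    #include "pipe/p_state.h"   /* pipe_stencil_state, PIPE_STENCIL_OP_KEEP */

    /* When every stencil op is KEEP, the stored stencil values can
     * never change, so the generator can emit only the stencil test
     * and skip the new-value and writemask computation. */
    static bool
    stencil_is_read_only(const struct pipe_stencil_state *stencil)
    {
        return stencil->fail_op == PIPE_STENCIL_OP_KEEP &&
               stencil->zfail_op == PIPE_STENCIL_OP_KEEP &&
               stencil->zpass_op == PIPE_STENCIL_OP_KEEP;
    }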
Was 32, now 5. The param is expressed as a power-of-two exponent,
so 5 yields the intended 32-byte alignment (1 << 5 == 32), whereas
32 asked for a 1 << 32 alignment. The net effect was that the
alignment was a no-op on x86, but on PPC we always got the same
memory address every time rtasm_exec_malloc() was called.
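One plausible reading of the platform difference, assuming the pool
computes alignment as 1 << param: x86 variable shifts use only the
low bits of the shift count, so 1 << 32 evaluates to 1 (alignment
effectively disabled), while PPC shifts of 32 or more yield 0, and
the align-up mask then maps every address to the same value. The
intended arithmetic, as a sketch:

    #include <stdint.h>

    /* The alignment parameter is a power-of-two exponent:
     * alignment = 1 << param, so param = 5 means 32-byte alignment.
     * (Sketch; the actual pool code may differ in detail.) */
    static uintptr_t
    align_up(uintptr_t addr, unsigned pot_exponent)
    {
        uintptr_t align = (uintptr_t)1 << pot_exponent;  /* 1 << 5 == 32 */
        return (addr + align - 1) & ~(align - 1);
    }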
Previously, since my check_aperture API change, we would check each
piece of state against the batchbuffer individually, but never all of
the state against the batchbuffer at once. Besides not being terribly
useful in assuring success, this probably also increased CPU load by
calling check_aperture many times per primitive.
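In outline, the fix is to validate every buffer a primitive will
reference in a single aperture check. The sketch below uses libdrm's
array-based drm_intel_bufmgr_check_aperture_space(), which may
postdate the exact API in this change; the buffer names are
illustrative:

    #include <stdbool.h>
    #include "intel_bufmgr.h"

    /* Validate all of the primitive's buffers against the aperture
     * together, instead of one check_aperture call per state piece. */
    static bool
    primitive_fits(drm_intel_bo *batch_bo, drm_intel_bo *vb_bo,
                   drm_intel_bo *tex_bo, drm_intel_bo *depth_bo)
    {
        drm_intel_bo *bos[] = { batch_bo, vb_bo, tex_bo, depth_bo };
        return drm_intel_bufmgr_check_aperture_space(bos, 4) == 0;
    }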
This avoids issues with dereferencing stale cliprects around
intel_draw_buffer time. Additionally, take advantage of cliprects
staying constant for FBOs and DRI2, and emit cliprects in the
batchbuffer instead of having to flush the batch each time they
change.
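Schematically (the helper names here are hypothetical, not the
driver's): with FBOs and DRI2 the cliprects never change, so the
drawing rectangle can be written into the batch like any other
state packet:

    /* Hypothetical sketch of the two paths. */
    if (cliprects_are_constant) {
        /* FBO/DRI2: emit the drawing rectangle into the batch */
        emit_drawing_rect_state(batch, &rect);
    } else {
        /* legacy path: cliprects can change, so the batch must be
         * flushed before they do */
        intel_batchbuffer_flush(batch);
    }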