OpenGL allows mixing and matching depth and stencil renderbuffers in
framebuffer objects, while the hardware really only supports interleaved
depth/stencil buffers. This makes for some tricky buffer management.

An extra wrinkle is the situation where the user allocates a 16bpp depth
texture or renderbuffer and then tries to render to it along with a
stencil buffer. We'd have to promote the 16bpp Z values to 24-bit Z
values and mix in the stencil values to set up the combined
depth/stencil renderbuffer. There's no support for that yet, so always
allocate 32bpp depth textures/renderbuffers for now.
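
A minimal sketch of that allocation policy, assuming a hypothetical
helper (the enum names are from EXT_packed_depth_stencil; this is not
the driver's actual code):

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Map any requested depth format to a 32bpp interleaved Z24_S8
     * allocation, since promoting 16bpp Z values and merging in
     * stencil at attachment time is not supported. */
    static GLenum
    choose_depth_format(GLenum requested)
    {
       switch (requested) {
       case GL_DEPTH_COMPONENT16:  /* would need 16->24-bit Z promotion */
       case GL_DEPTH_COMPONENT24:
       case GL_DEPTH_COMPONENT32:
          return GL_DEPTH24_STENCIL8_EXT;
       default:
          return requested;
       }
    }
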
Previously, MaxTextureUnits was used to validate both texture image
units and texture coordinate units in fragment programs. Instead, use
MaxTextureCoordUnits for texture coordinate units and
MaxTextureImageUnits for texture image units.
Fixes bugzilla #19468.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
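
A sketch of the corrected check, with a minimal stand-in for the
driver's limits struct (the struct and function here are hypothetical
illustrations, not Mesa's exact code):

    #include <GL/gl.h>

    struct limits {
       GLuint MaxTextureImageUnits;  /* sampler limit for fragment programs */
       GLuint MaxTextureCoordUnits;  /* texcoord-set limit */
    };

    /* Validate each unit kind against its own limit rather than
     * against MaxTextureUnits. */
    static GLboolean
    validate_units(const struct limits *c, GLuint imageUnit, GLuint coordUnit)
    {
       if (imageUnit >= c->MaxTextureImageUnits)
          return GL_FALSE;   /* sampler index out of range */
       if (coordUnit >= c->MaxTextureCoordUnits)
          return GL_FALSE;   /* texcoord index out of range */
       return GL_TRUE;
    }
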
Copied language from the glXSwapBuffers manual page about the implicit
glFlush and expected command completion. This just codifies what
people already expect from glXCopySubBufferMESA. The intention of
this command is to work like glXSwapBuffers but on a sub-rectangle of
the drawable.
Acked-by: Brian Paul <brianp@vmware.com>
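
A usage sketch, assuming the GLX_MESA_copy_sub_buffer extension is
available at runtime (present_region is a hypothetical wrapper):

    #include <GL/glx.h>

    typedef void (*CopySubBufferFn)(Display *, GLXDrawable,
                                    int, int, int, int);

    /* Present just the given sub-rectangle, with the same implicit
     * glFlush semantics as glXSwapBuffers. */
    static void
    present_region(Display *dpy, GLXDrawable draw,
                   int x, int y, int w, int h)
    {
       CopySubBufferFn copy_sub_buffer = (CopySubBufferFn)
          glXGetProcAddressARB((const GLubyte *) "glXCopySubBufferMESA");

       if (copy_sub_buffer)
          copy_sub_buffer(dpy, draw, x, y, w, h);
       else
          glXSwapBuffers(dpy, draw);   /* fall back to a full swap */
    }
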
When the do_blit_bitmap() function is used, the fragment Z seems to
always be 1.0. If depth testing is enabled, that means bitmap fragments
are often occluded by other rendering, so the bitmap doesn't appear
even when rasterpos.Z == 0.

The fix is to use the intel_texture_bitmap() path when depth testing is
enabled. Also, fix the incorrect Z coordinate: it needs to be an NDC
value in [-1,1].
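
The Z fix amounts to remapping the raster position's depth-range value
from [0,1] to [-1,1]; a one-line sketch, assuming the default depth
range (illustrative, not the driver's exact code):

    /* Raster position Z is in window/depth-range space [0,1]; the
     * vertex emitted by the texture path needs NDC Z in [-1,1]. */
    static float
    raster_z_to_ndc(float raster_z)
    {
       return raster_z * 2.0f - 1.0f;
    }
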
Just call _mesa_append_fog_code() if the fragment program's FogOption
is not GL_NONE. This allows us to remove some unnecessary i965 fog
code. Note: the arbfplight.c demo can be used to test this (see
DO_FRAGMENT_FOG).
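
A minimal sketch of the call site, assuming Mesa's
_mesa_append_fog_code() from shader/programopt.h (the surrounding
function is hypothetical, and the exact signature may differ by Mesa
version):

    #include "main/mtypes.h"
    #include "shader/programopt.h"

    /* Let core Mesa append fog instructions to the fragment program
     * instead of emitting driver-specific fog code. */
    static void
    prepare_fragment_program(GLcontext *ctx, struct gl_fragment_program *fp)
    {
       if (fp->FogOption != GL_NONE)
          _mesa_append_fog_code(ctx, fp);
    }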