The pattern could be stack-allocated, so we can't take a reference to it.
Some testing of quartz shows that it doesn't deal with malloc failure particularly
well. In the best case CGFunctionCreate returns NULL; in the worst case it just crashes.
Quartz does seem to be able to handle a NULL CGFunctionRef, so returning NULL if
we fail to copy the pattern avoids complicating the code to deal with
propagating the failure and shouldn't cause any additional crashes.
Based on a patch by Paolo Bonzini.
As part of this, avoid using cairo_pattern_get_matrix() because it requires a
'cairo_pattern_t *' instead of a 'const cairo_pattern_t *'.
Also, make a copy of the pattern before passing it in to cairo_set_source().
Repeat should be handled using MOD instead of '%' so that negative numbers
are handled as expected. E.g. -1 mod 600 = 599, not -1 as the '%' operator
gives. This was causing https://bugzilla.mozilla.org/show_bug.cgi?id=466258
Patch from Robert O'Callahan
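A minimal illustration of the MOD vs. '%' difference (a standalone sketch, not the patched cairo code):

    /* C's '%' truncates towards zero, so it can return a negative
     * remainder; image repeat needs the always-non-negative result. */
    static int
    positive_mod (int a, int b)
    {
        int m = a % b;
        if (m < 0)
            m += b;    /* e.g. -1 % 600 == -1, but -1 mod 600 == 599 */
        return m;
    }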
This speeds up the mask generation step in cairo_fill() for the image
surface by up to 10x in especially favourable cases.
image-rgba twin-800 7757.80 0.20% -> 749.41 0.29%: 10.36x speedup
image-rgba spiral-diag-pixalign-nonzero-fill-512 15.16 0.44% -> 3.45 8.80%: 5.54x speedup
More typical simple non-rectilinear geometries are sped up by 30-50%.
This patch does not affect any stroking operations or any fill
operations of pixel aligned rectilinear geometries; those are still
rendered using trapezoids.
This implementation first produces an A8 alpha mask and then uses
pixman_image_composite to combine the source with the destination through that mask.
Clipping is handled by pixman when it is region clipping or by
cairo-surface-fallback when it is something more complex.
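Roughly, the final step corresponds to a pixman call along these lines (a simplified sketch; the variable names and the image setup are illustrative, not the actual cairo internals):

    /* Build an A8 coverage mask for the fill and use it to composite
     * the source onto the destination in a single pixman call. */
    pixman_image_t *mask = pixman_image_create_bits (PIXMAN_a8,
                                                     width, height,
                                                     NULL, 0);
    /* ... the scan converter writes the coverage values into the mask ... */
    pixman_image_composite (PIXMAN_OP_OVER,
                            src, mask, dst,
                            src_x, src_y,   /* window into the source */
                            0, 0,           /* window into the mask   */
                            dst_x, dst_y,   /* window into the dest   */
                            width, height);
    pixman_image_unref (mask);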
Imports a new polygon scan converter implementation from the
repository at
http://cgit.freedesktop.org/~joonas/glitter-paths/
Glitter paths is a stand-alone polygon rasteriser derived from David
Turner's reimplementation of Tor Andersson's 15x17 supersampling
rasteriser from the Apparition graphics library. The main new feature
in this implementation is cheaply choosing, per scan line, between doing
fully analytical coverage computation for an entire row at a time
vs. using a supersampling approach.
A surface will have the chance to use span rendering at cairo_fill()
time by creating a renderer for a specific combination of
pattern/dst/op before the path is scan converted. The protocol is to
first call check_span_renderer() to see if the surface wants to render
with spans and then later call create_span_renderer() to create the
renderer for real once the extents of the path are known.
No backends have an implementation yet.
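A hypothetical, self-contained sketch of the two-step protocol (every name and signature below is a placeholder, not cairo's real backend interface):

    typedef struct { int x, y, width, height; } rect_t;
    typedef struct { int unused; } span_renderer_t;

    /* check_span_renderer(): does the surface want spans for this
     * pattern/dst/op combination at all? */
    static int check_span_renderer (void *surface) { (void) surface; return 1; }

    /* create_span_renderer(): create the renderer for real once the
     * path extents are known. */
    static span_renderer_t *
    create_span_renderer (void *surface, const rect_t *path_extents)
    {
        static span_renderer_t renderer;
        (void) surface; (void) path_extents;
        return &renderer;
    }

    static void
    fill_with_spans (void *surface)
    {
        if (!check_span_renderer (surface))
            return;                       /* fall back to trapezoids */

        rect_t extents = { 0, 0, 0, 0 };  /* found while scan converting */
        span_renderer_t *renderer = create_span_renderer (surface, &extents);
        (void) renderer;                  /* spans are then fed row by row */
    }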
A cairo_span_renderer_t implementation can be provided by a surface if
it wants to render paths as horizontal spans of the alpha component of
a mask. Its job is to composite a source pattern to the destination
surface when given spans of alpha coverage for a row while taking care
of backend specific clipping.
A cairo_scan_converter_t takes edges of a flattened path and generates
spans for a span renderer to render.
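As an illustration, the per-row data might look like this (a hypothetical layout; cairo's actual span types may differ):

    /* Hypothetical shape of the data handed from the scan converter to a
     * span renderer: each span starts at x and carries an 8-bit coverage
     * value that holds until the next span's x. */
    typedef struct {
        int           x;
        unsigned char coverage;   /* 0 = empty, 255 = fully covered */
    } span_t;

    /* Called once per row with the spans covering that row. */
    typedef void (*render_row_fn) (void *renderer, int y,
                                   const span_t *spans, int num_spans);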
A cairo_composite_rectangles_t contains the coordinates of rectangular
windows into each of the source pattern, mask, clip and destination
surface containing the pixels that will combine in a compositing
operation. The idea is to have a uniform way to represent all the
translations involved rather than overloading parameters like src_x/y,
dst_x/y, etc., sometimes with different incompatible meanings across
functions.
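One way to picture it (an illustrative sketch; the field names are assumptions, not cairo's actual struct):

    /* One rectangle origin per participant plus a common width/height,
     * replacing scattered src_x/y, dst_x/y, ... parameters. */
    typedef struct {
        struct { int x, y; } src;   /* window into the source pattern */
        struct { int x, y; } mask;  /* window into the mask           */
        struct { int x, y; } clip;  /* window into the clip           */
        struct { int x, y; } dst;   /* window into the destination    */
        int width, height;          /* size common to all four windows */
    } composite_rectangles_t;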
We want to hit the current fast paths for rendering axis aligned
rectilinear paths rather than spans, and for that we need to be able
to identify regional paths.
Peter Hercek reported, and provided a very useful test case for, a bug
that caused his applications to crash with Cairo detecting a
non-invertible pattern matrix and thus asserting the impossible happened.
Bisecting revealed that the bug first appeared with 3c18d95 and
disappeared with 0d0c6a1. Since neither of these explain the crash,
further investigation revealed a compiler bug (gcc 4.3.3 20081130,
earlier versions have different bugs!) that caused the matrix inversion
to be invalid iff _cairo_matrix_scalar_multiply() was inlined (i.e. -O0,
or an explicit noinline attribute on that function, prevented the bug, as
did -msse). So we apply this workaround to hide the bug in the stable
series...
The matrix is quite often just a simple scale and translate (or even
identity!). For this class of matrix, we can skip the full adjoint
rearrangement and determinant calculation and just compute the inverse
directly.
(cherry picked from commit 0d0c6a199c)
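For a pure scale-and-translate matrix the inverse follows directly; a minimal sketch using the public cairo_matrix_t fields (not the exact code from the commit, and it assumes xx and yy are non-zero):

    #include <cairo.h>

    /* Invert a scale+translate matrix (xy == yx == 0) in place,
     * skipping the general adjoint/determinant path. */
    static void
    invert_scale_translate (cairo_matrix_t *m)
    {
        m->xx = 1.0 / m->xx;
        m->yy = 1.0 / m->yy;
        m->x0 = -m->x0 * m->xx;   /* -x0 / original xx */
        m->y0 = -m->y0 * m->yy;   /* -y0 / original yy */
    }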
Otherwise, if the underlying glitz drawable has an alpha channel, glitz_set_rectangles
will set its alpha channel to the specified value instead of an opaque one, which affects
the following composite operations since glitz draws to the attached drawable and then
copies its contents to the dst surface. With this commit, three test cases (operator,
operator-alpha and unbounded-operator) now pass.
Use unsigned long in the first place to prevent the compiler from
sign-extending into the upper bits. E.g. an alpha mask of 0xff000000
would expand to 0xffffffffff000000 on a 64-bit platform, which is not
what we expect.
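A standalone illustration of the sign-extension pitfall (the exact mask value used in the code is an assumption here):

    #include <stdio.h>

    int main (void)
    {
        /* The top bit of the 32-bit mask is set, so widening a signed int
         * to 64 bits sign-extends into the upper word on an LP64 platform... */
        long sign_extended = (int) 0xff000000;
        /* ...whereas keeping it unsigned from the start does not. */
        unsigned long kept = 0xff000000UL;

        printf ("%lx\n", (unsigned long) sign_extended); /* ffffffffff000000 */
        printf ("%lx\n", kept);                          /* ff000000 */
        return 0;
    }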
The clone_similar implementation open-coded various image surface functions
and failed to clone the sub-region, resulting in failures for the
mask-transformed-* and large-source tests.
Ensure we do not loop forever trying to minimise the error between the
pixman and cairo matrices - for instance when the FPU is not running at
full precision.
Adrian Johnson reported that cygwin complained about the use of the void *
within feof() as it was using a macro and attempted to dereference the
void *...
When computing the bounds of the clip path, we care more about a fast result
than absolute precision, as the extents are only used as a guide to trim
future operations. So computing the extents of the path suffices.