Handling clip as part of the surface state, as opposed to being part of
the operation state, is cumbersome and a hindrance to providing true proxy
surface support. For example, the clip must be copied from the surface
onto the fallback image, but this was forgotten, causing undue hassle in
each backend. Another example is the contortion the meta surface
endures to ensure the clip is correctly recorded. By contrast, passing the
clip along with the operation is quite simple and enables us to write
generic handlers for providing surface wrappers. (And in the future, we
should be able to write more esoteric wrappers, e.g. automatic 2x FSAA,
trivially.)
In brief, instead of the surface automatically applying the clip before
calling the backend, the backend can call into a generic helper to apply
clipping. For raster surfaces, clip regions are handled automatically as
part of the composite interface. For vector surfaces, a clip helper is
introduced to replay the clip and call back into an intersect_clip_path()
function as necessary.
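A minimal sketch of how such a vector clip helper could look, assuming a
hypothetical clip-path list and callback signature (clip_path_t,
intersect_clip_path_func_t and clip_replay are illustrative names, not the
actual cairo internals):

#include "cairoint.h"

/* Hypothetical clip-path stack: newest clip at the head, older clips
 * reachable via prev. */
typedef struct _clip_path clip_path_t;
struct _clip_path {
    cairo_path_fixed_t  path;
    cairo_fill_rule_t   fill_rule;
    double              tolerance;
    cairo_antialias_t   antialias;
    clip_path_t        *prev;
};

/* Hypothetical callback type: the backend intersects its current clip
 * with one more path. */
typedef cairo_status_t
(*intersect_clip_path_func_t) (void               *closure,
                               cairo_path_fixed_t *path,
                               cairo_fill_rule_t   fill_rule,
                               double              tolerance,
                               cairo_antialias_t   antialias);

/* Replay the accumulated clip paths, oldest first, calling back into
 * the backend for each one. */
static cairo_status_t
clip_replay (clip_path_t                *clip,
             intersect_clip_path_func_t  func,
             void                       *closure)
{
    cairo_status_t status;

    if (clip == NULL)
        return CAIRO_STATUS_SUCCESS;

    status = clip_replay (clip->prev, func, closure);
    if (status)
        return status;

    return func (closure, &clip->path,
                 clip->fill_rule, clip->tolerance, clip->antialias);
}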
Whilst this is not primarily a performance-related change (it should
simply move the computation of the clip from the moment it is applied by
the user to the moment it is required by the backend), it is important
to track any potential regression:
ppc:
Speedups
========
image-rgba  evolution-20090607-0      1026085.22 0.18% ->  672972.07 0.77%:  1.52x speedup ▌
image-rgba  evolution-20090618-0       680579.98 0.12% ->  573237.66 0.16%:  1.19x speedup ▎
image-rgba  swfdec-fill-rate-4xaa-0    460296.92 0.36% ->  407464.63 0.42%:  1.13x speedup ▏
image-rgba  swfdec-fill-rate-2xaa-0    128431.95 0.47% ->  115051.86 0.42%:  1.12x speedup ▏
Slowdowns
=========
image-rgba  firefox-periodic-table-0    56837.61 0.78% ->   66055.17 3.20%:  1.09x slowdown ▏
A surface will have the chance to use span rendering at cairo_fill()
time by creating a renderer for a specific combination of
pattern/dst/op before the path is scan converted. The protocol is to
first call check_span_renderer() to see if the surface wants to render
with spans, and then later call create_span_renderer() to create the
renderer for real once the extents of the path are known.
No backends have an implementation yet.
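A rough sketch of the caller-side protocol, with assumed vtable signatures
and hypothetical helper names (fill_with_spans, compute_path_extents and
scan_convert_to_spans are placeholders; only check_span_renderer() and
create_span_renderer() come from the text above):

#include "cairoint.h"

static cairo_int_status_t
fill_with_spans (cairo_surface_t       *surface,
                 cairo_operator_t       op,
                 const cairo_pattern_t *pattern,
                 cairo_path_fixed_t    *path,
                 cairo_antialias_t      antialias)
{
    cairo_span_renderer_t *renderer;
    cairo_rectangle_int_t  extents;

    /* 1. Cheap query: does this surface want to render this
     *    pattern/dst/op combination with spans at all? */
    if (! surface->backend->check_span_renderer (op, pattern, surface,
                                                 antialias))
        return CAIRO_INT_STATUS_UNSUPPORTED;

    /* 2. Compute the path extents (details elided). */
    compute_path_extents (path, &extents);

    /* 3. Only now, with the extents known, is the renderer created
     *    for real. */
    renderer = surface->backend->create_span_renderer (op, pattern, surface,
                                                       antialias, &extents);
    if (renderer == NULL)
        return CAIRO_INT_STATUS_UNSUPPORTED;

    /* 4. Scan convert the path, feeding spans to the renderer. */
    return scan_convert_to_spans (path, renderer);
}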
This patch introduces three macros: _cairo_malloc_ab,
_cairo_malloc_abc, and _cairo_malloc_ab_plus_c, and replaces various calls
to malloc(a*b), malloc(a*b*c), and malloc(a*b+c) with them. The macros
return NULL if integer overflow would occur during the allocation. See
CODING_STYLE for more information.
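For example, a conversion might look like this (the surrounding variables
are hypothetical; only the macro names come from the patch):

/* Before: num_points * sizeof (cairo_point_t) can overflow and wrap
 * around to a small allocation. */
points = malloc (num_points * sizeof (cairo_point_t));

/* After: the macro checks for overflow and returns NULL instead. */
points = _cairo_malloc_ab (num_points, sizeof (cairo_point_t));
if (points == NULL)
    return CAIRO_STATUS_NO_MEMORY;

/* Similarly, malloc (a*b + c) becomes: */
buf = _cairo_malloc_ab_plus_c (num_glyphs, sizeof (cairo_glyph_t), 1);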
This is necessary to avoid many portability problems, as cairoint.h includes
config.h. Without a test we will regress again, hence add one.
The inclusion idiom for cairo now is:
#include "cairoint.h"
#include "cairo-something.h"
#include "cairo-anotherthing-private.h"
#include <some-library.h>
#include <other-library/other-file.h>
Moreover, some standard header files are included from cairoint.h and need
not be included again.
Add notes for the new BeOS backend
Add disabled by default BeOS backend
Add BeOS files
New
New
BEOS_SURFACE_FEATURE
BeOS mutex functions
Ignore files generated by the BeOS tests
Add cairo-test-beos.*
New.
Test BeOS backend.