These reference images are generated by the new GENERATE_REFERENCE mode that the
previous commit introduced.
I have no idea what the "base" images are for. From my reading of the code in
boilerplate/, these images will be used by the test-XXX targets. However, they
seem to produce the same result as e.g. the image backend. Thus, I deleted
these files.
There is still pthread-same-source.quartz.xfail.png. This file was created in
commit b6e16b8d and touched in commit 5a1e590b1. No idea if this is still valid
and since I don't have a Mac, I won't touch it.
The test is still broken on the following backends (out of the backends I have
compiled in). This mostly seems to be differences in image scaling, but I
couldn't figure out an easy way to tell the test suite that the new results are
correct.
test-paginated, ps2, ps3, xcb, xcb-window, xcb-window&, xcb-fallback, xlib,
xlib-window, xlib-fallback, recording
Signed-off-by: Uli Schlachter <psychon@znc.in>
Add test case for https://bugs.freedesktop.org/show_bug.cgi?id=68382
Something has regressed in the recording surface. All the recording
surface based backends lose the alpha from the paint_with_alpha operation.
The _cairo_color_double_to_short() function converts a double
precision floating point value in the range of [0.0, 1.0] to a
uint16_t integer by dividing the [0.0, 1.0] range into 65536
equal-sized intervals and then associating each interval with an
integer.
Under the assumption that an integer i corresponds to the real value
i / 65535.0, this algorithm introduces more error than necessary, as can
be seen from the following picture showing the analogous transformation
for two-bit integers:
+-----------+-----------+-----------+-----------+
0b00 | 0b01 | 0b10 | 0b11
+-----------+-----------+-----------+-----------+
which shows that some floating point values are not converted to the
integer that would minimize the error relative to the value that integer
corresponds to.
Instead, this patch uses standard rounding, which makes the diagram
look like this:
+-------+---------------+---------------+-------+
0b00 | 0b01 | 0b10 | 0b11
+-------+---------------+---------------+-------+
It's clear that if the values corresponding to the given integers are
fixed, then it's not possible to decrease the resulting error by
moving any of the interval boundaries.
See this thread for more information:
http://lists.freedesktop.org/archives/cairo/2013-October/024691.html
Reference images updated:
pthread-similar.ref.png
record-paint-alpha.ref.png
record90-paint-alpha.argb32.ref.png
record90-paint-alpha.rgb24.ref.png
xcb-huge-image-shm.ref.png
xcb-huge-subimage.ref.png
All of these have only one-step differences to the old images.
Also update the image.argb32 reference images, since for now we're just
accepting pixman's output as truth. This fixes up several tests:
                    was    is
Tests run:          420   420
Passed:             224   261
Failed:             195   159
Expected Failed:      0     0
Error:                0     0
Crashed:              0     0
Untested:             0     0
Total:              420   420
Thanks to psychon for finding the code error in the test.
This adds test cases for the various cairo filter options, each of which
maps to a corresponding pixman filter. Use the 'downscale' keyword when
invoking tests with cairo-test-suite.
The 24-pixel reference images were produced from quad-color.png using
Gimp's Scale Image command with Interpolation set to None. It is
assumed that all filters should handle a 1:4 scaling cleanly with no
antialiased blurring.
The 95-pixel reference images assume differing types of antialiasing
based on the quality level. We are using the image.argb32 output as
reference here. Potentially some other rendering algorithm could
conceivably provide better results in the future.
The 96-pixel reference images are simply copies of the original
quad-color.png file. It is assumed that 1:1 downscaling operations
should produce no visible change to the original image.
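As a rough illustration (plain C, not the test's actual code), nearest-neighbour sampling shows why integer-ratio scaling is expected to be clean: each destination pixel maps to exactly one source pixel, so a 1:1 scale is the identity and a 1:4 scale is pure subsampling with no blending:

```c
#include <stdint.h>

/* Illustrative nearest-neighbour scale: src is sw x sh, dst is dw x dh,
 * both tightly packed 32-bit pixels. Not the test suite's real code. */
static void
scale_nearest (const uint32_t *src, int sw, int sh,
               uint32_t *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++) {
        for (int x = 0; x < dw; x++) {
            int sx = x * sw / dw;   /* exact for integer ratios */
            int sy = y * sh / dh;
            dst[y * dw + x] = src[sy * sw + sx];
        }
    }
}
```

With dw == sw and dh == sh every pixel is copied unchanged, matching the 1:1 expectation above.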
Signed-off-by: Bryce Harrington <b.harrington@samsung.com>
Downscaling from 96 to 24 is easy since it's an even multiple, so also
try a scale that is off by one pixel (96 to 95).
This adds a 1:1 scaling test case as well, which should pass through the
image unchanged.
Signed-off-by: Bryce Harrington <b.harrington@samsung.com>
This adds pixman-downscale.c, which tests correctness of PNG images
scaled down using pixman routines.
Signed-off-by: Bryce Harrington <b.harrington@samsung.com>
Commit d7f5a1bec fixed a bug. This caused 12 new test failures for the
test-traps test target:
caps-tails-curve degenerate-arc degenerate-path joins subsurface
subsurface-scale twin twin-antialias-gray twin-antialias-mixed
twin-antialias-none twin-antialias-subpixel user-font
Most of these are indeed (new?) bugs. However, caps-tails-curve actually started
producing the expected result and the reference image just wrongly captures the
old state of things.
At the time of that commit, just taking the output from test-traps as the new
reference image worked fine for all backends. However, with current git,
something has introduced more antialiasing noise: test-traps' output changed
again while cairo-xcb stayed with the old result. Thus, we also need a new reference
image to fix this test.
(The wrong reference images come from commit 8488ae02 which turned test-traps'
results into reference images)
Signed-off-by: Uli Schlachter <psychon@znc.in>
Where a content specific reference image exists, prefer to have both
content reference images (i.e. both argb32.ref and rgb24.ref) rather
than a mix of .ref and argb32/rgb24.
Courtesy of the improved check-ref-dups written by Bryce Harrington:
Running make check on the codebase (with default configuration) with the
redundant images removed produces essentially the same test results:
Before
------
Tests run: 13687
Passed: 9216
Failed: 3566
Expected Failed: 312
Error: 1
Crashed: 17
Untested: 575
Total: 13687
After
-----
Tests run: 13689
Passed: 9216
Failed: 3566
Expected Failed: 312
Error: 1
Crashed: 19
Untested: 575
Total: 13689
(with the exception being the pthread tests misbehaving between runs)
A side effect of
commit c986a7310b
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Jan 24 08:55:54 2013 +0000
image: Enable inplace compositing with opacities for general routines
is that we should in theory be reducing the rounding errors when
compositing coverage.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
This creates an image surface with a non-natural stride and paints it to a
similar surface.
In the xcb backend, this causes a call to _cairo_xcb_connection_put_subimage()
which tries to send a huge PutImage request. As a result, xcb kills the X11
connection.
Signed-off-by: Uli Schlachter <psychon@znc.in>
In order to prevent underflow when searching for the closing pen vertex,
we first need to be sure that it does not simply lie next to the opening
pen vertex. As a result we were missing many cases that should have been
a bevel (in == out) and generating almost complete round caps instead.
Reported-by: Dominik Röttsches <dominik.rottsches@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=56432
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Rather than using the traps reference for all targets, as this then
generates false negatives with the spans compositor.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
From https://bugs.freedesktop.org/show_bug.cgi?id=53841:
"Rectangle drawn incorrectly when it has zero height
and miter limit greater than 1.414"
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Carl Worth demonstrated a glaring bug in the new stroking code,
introduced in commit 545f30856a (stroke: Convert the outlines
into contour and then into a polygon), whereby only a bevel join was
being used to connect segments around a sharp inflection point.
This adds the two examples he reported to the test suite.
An overzealous update after converting antialiasing missed that the object of
this test was precisely to point out an error due to the antialiasing. So
restore it back to the pristine reference and mark the image backend as
failing.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Apparently this too suffered from bug-bo-collins and is fixed by
(bo-rectangular: Emit subsummed boxes for overlapping edges).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
If the join indicates the pair of edges are parallel, we may be
considering the final segment of the spline with a different tangent
vector than the slope of the final edge, which leads to falsely dropping
an edge. This has the effect that the line segments between 'arc arc arc
arc' (a rounded rectangle) are no longer horizontal or vertical. As path
construction tries to eliminate joins between collinear segments, this
optimisation should not be required anyway.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
As we convert the unaligned clip boxes to a region, we need to process
the intersection of the boxes with the clip surface as a separate step.
Fixes tighten-box for the base compositor.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Uli Schlachter spotted that I had inadvertently committed (606e9e1c9) a
broken set of test images for the tighten-bounds case and so masked a
nasty bug with the mishandling of unaligned clips.
Reported-by: Uli Schlachter <psychon@znc.in>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Despite copying across the font options from the PDF backend, it still
looks like the image surface is overriding the glyph placement... Odd.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reducing the number of passes has the usual antialiasing side-effects, as
well as the boon of being faster (and theoretically more accurate through
reduced loss of dynamic range).
On an i5-2520m:
swfdec-giant-steps-full 3240.43 -> 2651.36: 1.22x speedup
grads-heat-map 166.84 -> 136.79: 1.22x speedup
swfdec-giant-steps 940.19 -> 796.24: 1.18x speedup
ocitysmap 953.51 -> 831.96: 1.15x speedup
webkit-canvas-alpha 13924.01 -> 13115.70: 1.06x speedup
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>