Because we can independently set either the device scale or the device
offset, we need to be careful to recompute the inverse rather than simply
assume that the original contents of the device transform are the identity.
Fixes regression https://bugs.launchpad.net/inkscape/+bug/234546.
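For illustration, a minimal sketch of the idea (the helper name and
parameters are hypothetical; in cairo the device transform lives in the
surface): even when only the offset is being set, the inverse has to be
recomputed from the full matrix because a device scale may already be in
place.

    #include <cairo.h>

    static cairo_status_t
    set_device_offset (cairo_matrix_t *device_transform,
                       cairo_matrix_t *device_transform_inverse,
                       double          x_offset,
                       double          y_offset)
    {
        /* Only the translation components change; any previously set
         * device scale stays in place. */
        device_transform->x0 = x_offset;
        device_transform->y0 = y_offset;

        /* Recompute the inverse from the full combined matrix rather
         * than assuming the rest of the transform is the identity. */
        *device_transform_inverse = *device_transform;
        return cairo_matrix_invert (device_transform_inverse);
    }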
For this we extend the boilerplate get_image() routines to extract a
single page out of a paginated document and then proceed to manually
check each page of the fallback-resolution test.
(Well, that's the theory; in practice SVG doesn't support multiple pages,
and so we just generate a new surface for each resolution. But the
infrastructure is in place so that we can automate other tests,
e.g. test/multi-pages.)
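For what it's worth, the per-resolution loop looks roughly like this (a
hypothetical sketch of the test body, with made-up resolution values and
file names):

    #include <stdio.h>
    #include <cairo.h>
    #include <cairo-svg.h>

    static void
    test_fallback_resolutions (void)
    {
        static const double ppi[] = { 37.5, 72., 150., 300. };
        unsigned int i;

        for (i = 0; i < sizeof (ppi) / sizeof (ppi[0]); i++) {
            char filename[64];
            cairo_surface_t *surface;
            cairo_t *cr;

            /* SVG has no pages, so each resolution gets its own surface
             * (and its own output file) instead of its own page. */
            snprintf (filename, sizeof (filename),
                      "fallback-resolution-%g.svg", ppi[i]);
            surface = cairo_svg_surface_create (filename, 100, 100);
            cairo_surface_set_fallback_resolution (surface, ppi[i], ppi[i]);

            cr = cairo_create (surface);
            /* ... draw something that forces an image fallback ... */
            cairo_destroy (cr);
            cairo_surface_destroy (surface);
        }
    }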
A few tests explicitly checked whether the "ps" or "svg" target was
enabled, and this broke because of the name change. So fix them up to run
the generic test if either the PS or SVG target is enabled, as appropriate.
A little bit of sleep and reflection suggested that the use of
device_offset_[xy] was confusing and that clone_offset_[xy] was more
consistent with the function naming.
If the operator is unbounded, then its area of effect extends beyond
the mask defined by the trapezoids, and so we must always perform
the image composition.
Fixes test/operator*.
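Roughly, an operator is bounded by the mask if destination pixels the mask
never touches are left alone. The classification below is an illustrative
helper (not the internal implementation), but it shows why operators such
as SOURCE or IN force the full composition:

    #include <stdbool.h>
    #include <cairo.h>

    static bool
    operator_bounded_by_mask (cairo_operator_t op)
    {
        switch (op) {
        case CAIRO_OPERATOR_CLEAR:
        case CAIRO_OPERATOR_SOURCE:
        case CAIRO_OPERATOR_IN:
        case CAIRO_OPERATOR_OUT:
        case CAIRO_OPERATOR_DEST_IN:
        case CAIRO_OPERATOR_DEST_ATOP:
            /* Unbounded: these modify the destination even where the
             * mask is completely transparent, so compositing cannot be
             * limited to the trapezoid extents. */
            return false;
        default:
            /* OVER, ADD, ATOP, XOR, etc. leave unmasked destination
             * pixels untouched. */
            return true;
        }
    }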
In paint and show_glyphs, the compositing operator was not emitted at all.
In mask, the operator was also emitted for the mask itself, which is
wrong.
SVG's clear and source operators differ from cairo's in that they also
affect the destination where the source pixels are completely transparent.
We need to emit an additional clip-to-self property.
Librsvg does not support clip-to-self, so it renders the SVG
test outputs incorrectly.
This patch also removes a lot of useless spaces in the style property
strings (I know, this should go in another commit).
Previously the rule for clone_similar() was that the returned surface
had exactly the same size as the original, but only the contents within
the region of interest needed to be copied. This caused failures for very
large images in the xlib-backend (see test/large-source).
The obvious solution to allow cloning only the region of interest seemed
to be to simply set the device offset on the cloned surface. However, this
fails as a) nothing respects the device offset on the surface at that
layer in the compositing stack and b) possibly returning references to the
original source surface provides further confusion by mixing in another
source of device offset.
The second method was to add extra out parameters so that the
device offset could be returned separately and, for example, mixed into
the pattern matrix. Not as elegant, a couple of extra warts to the
interface, but it works - one less XFAIL...
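To illustrate the second approach (names here are hypothetical, not the
actual backend interface): given the offset returned through the out
parameters, the caller folds it into the pattern matrix along these lines:

    #include <cairo.h>

    static void
    apply_clone_offset (cairo_matrix_t *pattern_matrix,
                        int clone_offset_x,
                        int clone_offset_y)
    {
        cairo_matrix_t shift;

        /* The clone holds only the region of interest, starting at
         * (clone_offset_x, clone_offset_y) of the original surface, so
         * coordinates produced by the pattern matrix in original-surface
         * space must be shifted into the clone's space. */
        cairo_matrix_init_translate (&shift,
                                     -clone_offset_x, -clone_offset_y);
        cairo_matrix_multiply (pattern_matrix, pattern_matrix, &shift);
    }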
We use the full matrix in the hash computation, but only compare the
non-translation components in the equality check. This is no bug though,
as we set the ctm translation components of a scaled font to zero
explicitly. But the change makes the hash and equal functions
consistent, which is good.
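As a sketch of what "consistent" means here (hypothetical helpers, not the
scaled-font code itself): hash and compare exactly the same set of matrix
components.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <cairo.h>

    /* FNV-1a over the four non-translation components. */
    static uint32_t
    hash_matrix_components (const cairo_matrix_t *m)
    {
        const double components[4] = { m->xx, m->yx, m->xy, m->yy };
        const unsigned char *p = (const unsigned char *) components;
        uint32_t hash = 2166136261u;
        size_t i;

        for (i = 0; i < sizeof (components); i++) {
            hash ^= p[i];
            hash *= 16777619u;
        }
        return hash;
    }

    /* Compare exactly the components that the hash covers. */
    static bool
    matrix_components_equal (const cairo_matrix_t *a,
                             const cairo_matrix_t *b)
    {
        return a->xx == b->xx && a->yx == b->yx &&
               a->xy == b->xy && a->yy == b->yy;
    }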
One possibility for a read failure whilst converting the image is that the
external utility crashed. This information is important for the test suite
as knowing input that causes the converter to crash is just as vital as
identifying a crash within the library.
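A POSIX sketch of the distinction (hypothetical helper; the real test-suite
plumbing differs): inspect the child status so a crash is reported as a
crash rather than as a generic read failure.

    #include <stdio.h>
    #include <sys/wait.h>

    static int
    run_converter (const char *command)
    {
        FILE *converter = popen (command, "r");
        int status;

        if (converter == NULL)
            return -1;

        /* ... read the converted image from the pipe ... */

        status = pclose (converter);
        if (status == -1)
            return -1;

        if (WIFSIGNALED (status)) {
            fprintf (stderr, "converter crashed with signal %d\n",
                     WTERMSIG (status));
            return -1;
        }

        return WEXITSTATUS (status);
    }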
Since there is an implicit precedence in the ranking of the analysis
return codes, provide a function to centralize the logic within the
analysis surface and isolate the backends from the complexity.
Images in PDF are scaled to a unit square. In PS we set the
ImageMatrix to do the same. When the image is painted we scale the
graphics state to paint the image at the right size. In the case of
Type 3 fonts consisting of bitmap images we want to paint the images
at their original size so we scale the graphics state by the image
width and height.
The bug was that we were scaling by the width/height in the glyph
metrics. For non-rotated fonts this worked. However, for rotated fonts
the width/height of the glyph images may be larger than the
width/height in the glyph metrics. This resulted in a Type 3 font
where the glyph images were scaled slightly smaller than they should
have been.
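A rough cairo-level sketch of the intended behaviour (this is not the PS/PDF
backend code; the helper name is made up): scale by the bitmap's own size
and map the bitmap to the unit square, so the glyph image paints at its
original size regardless of what the glyph metrics say.

    #include <cairo.h>

    static void
    paint_glyph_bitmap (cairo_t *cr, cairo_surface_t *bitmap)
    {
        int width  = cairo_image_surface_get_width (bitmap);
        int height = cairo_image_surface_get_height (bitmap);
        cairo_matrix_t to_unit_square;

        cairo_save (cr);

        /* Scale the graphics state by the bitmap's own size, not by
         * the width/height from the glyph metrics, which can be smaller
         * for rotated fonts. */
        cairo_scale (cr, width, height);

        /* Map the bitmap onto the unit square, mirroring what the
         * PDF/PS ImageMatrix does, so the two scales cancel and the
         * bitmap paints at its original size. */
        cairo_set_source_surface (cr, bitmap, 0, 0);
        cairo_matrix_init_scale (&to_unit_square, width, height);
        cairo_pattern_set_matrix (cairo_get_source (cr), &to_unit_square);

        cairo_paint (cr);
        cairo_restore (cr);
    }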
Two changes here:
* Replace move_to;line_to;move_to;line_to sequences with
move_to;line_to;line_to when feasible.
* Close paths for round glyphs.
Both improve the stroke rendering of the joints (sketched below).
The first change also saves 3 bytes per joint (33 such joints);
those bytes are just left unused for now. To reclaim them one needs
to update the charset table. Something for a lazy Sunday afternoon
scripting task.
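In terms of the equivalent cairo path calls (the font data itself is stored
in a compact byte encoding, and these coordinates are made up), the joint
change is:

    #include <cairo.h>

    static void
    draw_joint (cairo_t *cr)
    {
        /* Before: move_to; line_to; move_to; line_to, i.e. two open
         * segments meeting at (4, 4) with no join between them. */

        /* After: move_to; line_to; line_to, one connected polyline, so
         * the stroker draws a proper join at (4, 4) and one move_to is
         * dropped from the data. */
        cairo_move_to (cr, 0, 0);
        cairo_line_to (cr, 4, 4);
        cairo_line_to (cr, 8, 0);

        /* For round glyphs the path is additionally closed so the
         * stroke joins up where it started. */
        /* cairo_close_path (cr); */
    }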
In the saving department, we can save further by:
- Getting rid of the left/ascent/descent values as we compute
glyph bounding box automatically. Then we can liberally use
the right value to adjust glyph advance width. Saves three
bytes per glyph (there are 96 glyphs in the font).
- First operation is always a move_to. So we can remove the 'm'
for that. Ugly though.
And the charset has zeros for the first 32 entries. Can get rid of
that too at the expense of handling it in the code...
In total, combining the above we can save some 500 bytes. The font
currently takes about 3.7kb.
The font data and rendering code are adapted from Keith Packard's Twin
window system. The hinting stuff is not ported yet, but hey, it renders!
The implementation uses user fonts, and the user font backend is modified
to use this font face (which we call "twin" font face internally) when
a toy font is needed.
The font face layer is then modified to use this font if:
- The toy font face "cairo" is asked for, or
- No native font backend is available, or
- The preferred native font backend fails with STATUS_UNSUPPORTED.
No font backend does this right now, but the idea is to change
FreeType to return it if no fonts are found on the system.
We also allow building with no font backends now!
The new doc/tutorial/src/twin.c file tests the twin face at various
sizes.
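As a usage sketch, selecting the face through the toy API is all it takes;
asking for the "cairo" family (or running with no native font backend at
all) routes text through the twin user font:

    #include <cairo.h>

    static void
    draw_sample (cairo_t *cr)
    {
        cairo_select_font_face (cr, "cairo",
                                CAIRO_FONT_SLANT_NORMAL,
                                CAIRO_FONT_WEIGHT_NORMAL);
        cairo_set_font_size (cr, 32);

        cairo_move_to (cr, 10, 40);
        cairo_show_text (cr, "rendered by the twin face");
    }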
The release notes for 1.7.6 say that we had dropped this
function, but apparently we had only planned to do that
and didn't actually get around to it until now.
Thanks to the RELEASING instructions, which gave a diff
command that pointed out this problem.
The problem showed up on OS X because the freetype backend reuses
font_face_t's, which kept the reference count high enough for long enough
to mask the problem on other platforms.
We reverted the public API for setting lcd_filter font options
back in 1b42bc8033, but we had left the implementation which
would examine fontconfig and Xft properties for the option, and
which would call into freetype for subpixel glyph rasterization.
However, I recently realized, (and the test suite had been trying
to tell me for a while), that this approach would cause a
regression for users who were previously using sub-pixel text,
but without sub-pixel rendering built directly into freetype.
That's not acceptable, so all the code is coming out for now.
As opposed to 8.61.
I had actually forgotten we had documented this. If I had
remembered I could have put more meaningful messages with
some of my recent updates to ps-specific reference images.
It's sad to have to do this. Back with commit 1a9809baa was the
original fix for device-offset-scale, (right after the test was
added), and it fixed it for all backends, including SVG. The fix
involved combining device_transform and CTM into the pattern matrix.
But then, we added the mask-transformed-image and
mask-transformed-similar tests, and commit 20be3182ef for fixing an
SVG-specific bug with masks. That fix involved subtracting away the
pattern matrix when emitting a mask to adhere to SVG semantics.
Unfortunately, this change also made the device-offset-scale test
start failing. A correct fix would probably subtract away only the CTM
portion and not the device_transform. However, the
_cairo_svg_surface_mask function sees only a pattern matrix and
doesn't know how to separate it into CTM and device_transform pieces.
So fixing this will probably require a change to the surface-backend
interface. And since we're not willing to do that so close to a major
release, we're adding yet another XFAIL.
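For the record, the fix we are punting on would look roughly like this
(hypothetical helper; the point is precisely that the backend does not
currently have the CTM available separately):

    #include <cairo.h>

    static cairo_status_t
    remove_ctm_from_pattern_matrix (cairo_matrix_t       *pattern_matrix,
                                    const cairo_matrix_t *ctm)
    {
        cairo_matrix_t ctm_inverse = *ctm;
        cairo_status_t status = cairo_matrix_invert (&ctm_inverse);

        if (status != CAIRO_STATUS_SUCCESS)
            return status;

        /* Strip only the CTM contribution, leaving whatever the
         * device_transform contributed.  (Which side the inverse is
         * applied on depends on how the two matrices were combined in
         * the first place.) */
        cairo_matrix_multiply (pattern_matrix, &ctm_inverse, pattern_matrix);

        return CAIRO_STATUS_SUCCESS;
    }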
Thanks to help from Chris, we fixed the bug that was making this
test fail with the PDF backend. All that was left was differing
treatment of the edges of the image---easy enough to address
with a pdf-specific reference image.
There is an implicit precedence when analyzing patterns for
compatibility, in order of descending precedence:
fatal error
unsupported
needs image fallback
needs meta-surface analysis
success.
So when we have two patterns, we need to check both analysis statuses
simultaneously, in order to correctly report the combined status.
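A sketch of the combination rule (the enum values are stand-ins for the
internal status codes): order the statuses by ascending precedence and take
the maximum, so neither pattern's requirements are silently dropped.

    typedef enum {
        ANALYSIS_SUCCESS = 0,
        ANALYSIS_NEEDS_META_SURFACE,
        ANALYSIS_NEEDS_IMAGE_FALLBACK,
        ANALYSIS_UNSUPPORTED,
        ANALYSIS_FATAL_ERROR
    } analysis_status_t;

    static analysis_status_t
    combine_analysis_status (analysis_status_t a, analysis_status_t b)
    {
        /* The enum is ordered by ascending precedence, so the combined
         * status is simply whichever of the two ranks higher. */
        return a > b ? a : b;
    }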
The primary bug here was some missing braces. The code was conditionally
assigning to backend_status, but then unconditionally checking for the
value assigned. The result was the leaking of an internal status value
(CAIRO_INT_STATUS_ANALYZE_META_SURFACE) which finally resulted in
an incomplete PDF file in the mask-transformed-similar test case.
While fixing this, also avoid re-using the backend_status variable
quite so much, which makes the code more readable.
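A hypothetical reduction of the shape of the bug (stand-in names, not the
actual analysis-surface code): the inner check was meant to run only when
the assignment did, and giving the value its own variable keeps internal
statuses from surviving to the end of the function.

    #include <stdbool.h>

    typedef enum {
        STATUS_SUCCESS = 0,
        STATUS_ANALYZE_META_SURFACE,   /* internal-only value */
        STATUS_UNSUPPORTED
    } status_t;

    static status_t
    analyze_pattern (const void *pattern)
    {
        (void) pattern;
        return STATUS_ANALYZE_META_SURFACE;
    }

    static status_t
    replay_meta_surface (const void *pattern)
    {
        (void) pattern;
        return STATUS_SUCCESS;
    }

    static status_t
    analyze_mask (const void *pattern, bool pattern_is_meta)
    {
        status_t pattern_status = STATUS_SUCCESS;

        /* The braces matter: without them only the first assignment is
         * guarded and the follow-up check runs unconditionally, so an
         * internal value can leak out of the function. */
        if (pattern_is_meta) {
            pattern_status = analyze_pattern (pattern);
            if (pattern_status == STATUS_ANALYZE_META_SURFACE)
                pattern_status = replay_meta_surface (pattern);
        }

        return pattern_status;
    }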
Since these functions are static we don't really need the full
name. And these two functions were both so long that they were
causing some serious line-wrap issues.