This is cairo's performance test suite.

One of the simplest ways to run the performance suite is:

    make perf

which will give a report of the speed of each individual test. See
below for more details on other options for running the suite.

Running the cairo performance suite
-----------------------------------
The lowest-level means of running the test suite is with the
cairo-perf program, (which is what "make perf" executes). Some
examples of running it:

    # Report on all tests with default number of iterations:
    ./cairo-perf

    # Report on 100 iterations of all gradient tests:
    ./cairo-perf -i 100 gradient

    # Generate raw results for 10 iterations into cairo.perf
    ./cairo-perf -r -i 10 > cairo.perf
    # Append 10 more iterations of the paint test
    ./cairo-perf -r -i 10 paint >> cairo.perf

Raw results aren't useful for reading directly, but are quite useful
when using cairo-perf-diff to compare separate runs (see more
below). The advantage of using the raw mode is that test runs can be
generated incrementally and appended to existing reports.
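
For example, a typical incremental session, (the file name and the
choice of the stroke tests here are arbitrary), might look like:

    # Quick first pass over all tests
    ./cairo-perf -r -i 5 > cairo.perf
    # Later, append more iterations of just the stroke tests
    ./cairo-perf -r -i 50 stroke >> cairo.perf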

Generating comparisons of separate runs
---------------------------------------
It's often useful to generate a chart showing the comparison of two
separate runs of the cairo performance suite, (for example, after
applying a patch intended to improve cairo's performance). The
cairo-perf-diff script can be used to compare two report files
generated by cairo-perf.

Again, by way of example:

    # Show performance changes from cairo-orig.perf to cairo-patched.perf
    ./cairo-perf-diff cairo-orig.perf cairo-patched.perf

This will work whether the data files were generated in raw mode (with
cairo-perf -r) or cooked, (cairo-perf without -r).
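
For example, a complete cooked-mode comparison, (hypothetical file
names, and assuming you rebuild cairo with your patch between the two
runs), might look like:

    # Measure the original performance
    ./cairo-perf > cairo-orig.perf
    # ... apply your patch and rebuild cairo ...
    # Measure the patched performance
    ./cairo-perf > cairo-patched.perf
    # Generate the comparison chart
    ./cairo-perf-diff cairo-orig.perf cairo-patched.perf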

Finally, in its most powerful mode, cairo-perf-diff accepts two git
revisions and will do all the work of checking each revision out,
building it, running cairo-perf for each revision, and finally
generating the report. Obviously, this mode only works if you are
using cairo within a git repository, (and not from a tar file). Using
this mode is as simple as passing the git revisions to be compared to
cairo-perf-diff:

    # Compare cairo 1.2.6 to cairo 1.4.0
    ./cairo-perf-diff 1.2.6 1.4.0

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD~1 HEAD

As a convenience, this common desire to measure a single commit is
supported by passing a single revision to cairo-perf-diff, in which
case it will compare it to the immediately preceding commit. So for
example:

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD

    # Measure the impact of an arbitrary commit by SHA-1
    ./cairo-perf-diff aa883123d2af90

Also, when passing git revisions to cairo-perf-diff like this, it will
automatically cache results and re-use them rather than re-running
cairo-perf over and over on the same versions. This means that if you
ask for a report that you've generated in the past, cairo-perf-diff
should return it immediately.
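
For example:

    # First invocation runs cairo-perf and caches the results
    ./cairo-perf-diff 1.2.6 1.4.0
    # Asking the same question later reuses the cached results
    ./cairo-perf-diff 1.2.6 1.4.0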

Now, sometimes it is desirable to generate more iterations rather than
re-using cached results. In this case, the -f flag can be used to
force cairo-perf-diff to generate new results and add them to what has
already been cached:

    # Measure the impact of the latest commit (force more measurement)
    ./cairo-perf-diff -f HEAD

And finally, the -f mode is most useful in conjunction with the --
option to cairo-perf-diff, which allows you to pass options through to
the underlying cairo-perf runs. This allows you to restrict the
additional test runs to a limited subset of the tests.

For example, a frequently used trick is to first generate a chart with
a very small number of iterations for all tests:

    ./cairo-perf-diff HEAD

Then, if any of the results look suspicious, (say there's a slowdown
reported in the text tests, but you think the text tests shouldn't be
affected), you can force more iterations to be tested for only those
tests:

    ./cairo-perf-diff -f HEAD -- text

Creating a new performance test
-------------------------------
This is where we could use everybody's help. If you have encountered a
sequence of cairo operations that is slower than you would like, then
please provide a performance test. Writing a test is very simple: it
requires you to write only a small C file with a couple of functions,
one of which exercises the cairo calls of interest.

Here is the basic structure of a performance test file:

    /* Copyright © 2006 Kind Cairo User
     *
     * ... Licensing information here ...
     * Please copy the MIT blurb as in other tests
     */

    #include "cairo-perf.h"

    static cairo_perf_ticks_t
    do_my_new_test (cairo_t *cr, int width, int height)
    {
	cairo_perf_timer_start ();

	/* Make the cairo calls to be measured */

	cairo_perf_timer_stop ();

	return cairo_perf_timer_elapsed ();
    }

    void
    my_new_test (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
	/* First do any setup for which the execution time should not
	 * be measured. For example, this might include loading
	 * images from disk, creating patterns, etc. */

	/* Then launch the actual performance testing. */
	cairo_perf_run (perf, "my_new_test", do_my_new_test);

	/* Finally, perform any cleanup from the setup above. */
    }

That's really all there is to writing a new test. The first function
above is the one that does the real work and returns a timing
number. The second function is the one that will be called by the
performance test rig (see below for how to accomplish that), and
allows for multiple performance cases to be written in one file,
(simply call cairo_perf_run once for each case, passing the
appropriate callback function to each).
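
For instance, here is a minimal sketch of one such setup function
providing two cases, (the do_my_new_test_solid and
do_my_new_test_gradient callbacks are hypothetical, and would each be
written just like do_my_new_test above):

    void
    my_new_test (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
	/* One cairo_perf_run call per case: each case gets its own
	 * report name and callback, but shares this setup. */
	cairo_perf_run (perf, "my_new_test_solid", do_my_new_test_solid);
	cairo_perf_run (perf, "my_new_test_gradient", do_my_new_test_gradient);
    }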

We go through this dance of indirectly calling your own function
through cairo_perf_run so that cairo_perf_run can call your function
many times and measure statistical properties over the many runs.

Finally, to fully integrate your new test case you just need to add
your new test to three different lists. (TODO: We should set this up
better so that the lists are maintained automatically---computed from
the list of files in cairo/perf, for example). Here's what needs to be
added:

 1. Makefile.am: Add the new file name to the cairo_perf_SOURCES list

 2. cairo-perf.h: Add a new CAIRO_PERF_DECL line with the name of your
    function, (my_new_test in the example above; see the sketch after
    this list)

 3. cairo-perf.c: Add a new row to the list at the end of the file. A
    typical entry would look like:

	{ my_new_test, 16, 64 }

    The last two numbers are a minimum and a maximum image size at
    which your test should be exercised. If these values are the same,
    then only that size will be used. If they are different, then
    intermediate sizes will be used by doubling. So in the example
    above, three tests would be performed at sizes of 16x16, 32x32 and
    64x64.
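
Putting steps 2 and 3 together, the additions might look like this,
(assuming, as with the existing entries, that CAIRO_PERF_DECL takes
just the function name and that the list rows are comma-terminated):

    /* In cairo-perf.h */
    CAIRO_PERF_DECL (my_new_test);

    /* In the list at the end of cairo-perf.c */
    { my_new_test, 16, 64 },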

Thanks for your contributions and have fun with cairo!