This is cairo's micro-benchmark performance test suite.

One of the simplest ways to run this performance suite is:

    make perf

which will give a report of the speed of each individual test. See
more details on other options for running the suite below.

A macro test suite (with full traces and more intensive benchmarks) is
also available; for this, see https://cgit.freedesktop.org/cairo-traces.
The macro-benchmarks are better measures of actual real-world
performance and should be preferred over the micro-benchmarks (and over
make perf) for identifying performance regressions or improvements. If
you copy or symlink that repository at cairo/perf/cairo-traces, then
make perf will run those tests as well (see the example below).
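
For example, one way to put the traces in place is a symlink (a
sketch; the paths are assumed and should be adjusted to your layout):

    # From a directory containing both the cairo and cairo-traces checkouts
    ln -s ../../cairo-traces cairo/perf/cairo-traces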

Running the micro-benchmarks
----------------------------
The micro-benchmark performance suite is composed of a series of
hand-written, short, synthetic tests that measure the speed of a
simple operation, such as painting a surface or showing glyphs. These
aim to give very good feedback on whether a performance-related patch
is successful without causing any performance degradation elsewhere.

The micro-benchmarks are compiled into a single executable called
cairo-perf-micro, which is what "make perf" executes. Some examples of
running it:

    # Report on all tests with the default number of iterations:
    ./cairo-perf-micro

    # Report on 100 iterations of all gradient tests:
    ./cairo-perf-micro -i 100 gradient

    # Generate raw results for 10 iterations into cairo.perf:
    ./cairo-perf-micro -r -i 10 > cairo.perf

    # Append 10 more iterations of the paint test:
    ./cairo-perf-micro -r -i 10 paint >> cairo.perf

Raw results aren't useful for reading directly, but they are quite
useful when using cairo-perf-diff to compare separate runs (see more
below). The advantage of raw mode is that test runs can be generated
incrementally and appended to existing reports.
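
Putting those pieces together, a typical raw-mode workflow might look
like this (a sketch; the file names are arbitrary and the rebuild step
is whatever change you are testing):

    # Baseline run
    ./cairo-perf-micro -r -i 10 > before.perf
    # ... apply your patch and rebuild ...
    ./cairo-perf-micro -r -i 10 > after.perf
    # Compare the two reports (cairo-perf-diff is described below)
    ./cairo-perf-diff before.perf after.perf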

Generating comparisons of separate runs
---------------------------------------
It's often useful to generate a chart showing the comparison of two
separate runs of the cairo performance suite (for example, after
applying a patch intended to improve cairo's performance). The
cairo-perf-diff script can be used to compare two report files
generated by cairo-perf.

Again, by way of example:

    # Show performance changes from cairo-orig.perf to cairo-patched.perf
    ./cairo-perf-diff cairo-orig.perf cairo-patched.perf

This will work whether the data files were generated in raw mode (with
cairo-perf -r) or cooked (cairo-perf without -r).

Finally, in its most powerful mode, cairo-perf-diff accepts two git
revisions and will do all the work of checking each revision out,
building it, running cairo-perf for each revision, and finally
generating the report. Obviously, this mode only works if you are
using cairo within a git repository (and not from a tar file). Using
this mode is as simple as passing the git revisions to be compared to
cairo-perf-diff:

    # Compare cairo 1.2.6 to cairo 1.4.0
    ./cairo-perf-diff 1.2.6 1.4.0

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD~1 HEAD

As a convenience, this common desire to measure a single commit is
supported by passing a single revision to cairo-perf-diff, in which
case it will compare it to the immediately preceding commit. For
example:

    # Measure the impact of the latest commit
    ./cairo-perf-diff HEAD

    # Measure the impact of an arbitrary commit by SHA-1
    ./cairo-perf-diff aa883123d2af90

Also, when passing git revisions to cairo-perf-diff like this, it will
automatically cache results and re-use them rather than re-running
cairo-perf over and over on the same versions. This means that if you
ask for a report that you've generated in the past, cairo-perf-diff
should return it immediately.

Now, sometimes it is desirable to generate more iterations rather than
re-use cached results. In this case, the -f flag can be used to force
cairo-perf-diff to generate additional results on top of what has been
cached:

    # Measure the impact of the latest commit (force more measurement)
    ./cairo-perf-diff -f

Finally, the -f mode is most useful in conjunction with the -- option
to cairo-perf-diff, which allows you to pass options to the underlying
cairo-perf runs. This lets you restrict the additional runs to a
limited subset of the tests.

For example, a frequently used trick is to first generate a chart with
a very small number of iterations for all tests:

    ./cairo-perf-diff HEAD

Then, if any of the results look suspicious (say, there's a slowdown
reported in the text tests, but you think the text tests shouldn't be
affected), you can force more iterations to be measured for only those
tests:

    ./cairo-perf-diff -f HEAD -- text

Generating comparisons of different backends
--------------------------------------------
An alternate question that is often asked is, "How does the speed of
one backend compare to another?" cairo-perf-compare-backends can read
files generated by cairo-perf and produce a comparison of the backends
for every test.

Again, by way of example:

    # Show relative performance of the backends
    ./cairo-perf-compare-backends cairo.perf

This will work whether the data files were generated in raw mode (with
cairo-perf -r) or cooked (cairo-perf without -r).
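
As a sketch of a complete round trip (this assumes your cairo build
includes more than one backend, so that a single run records results
for each of them):

    # Collect raw data, then compare the backends it covers
    ./cairo-perf-micro -r -i 10 > cairo.perf
    ./cairo-perf-compare-backends cairo.perf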

Creating a new performance test
-------------------------------
This is where we could use everybody's help. If you have encountered a
sequence of cairo operations that is slower than you would like, then
please provide a performance test. Writing a test is very simple: it
requires only a small C file with a couple of functions, one of which
exercises the cairo calls of interest.

Here is the basic structure of a performance test file:

    /* Copyright © 2006 Kind Cairo User
     *
     * ... Licensing information here ...
     * Please copy the MIT blurb as in other tests
     */

    #include "cairo-perf.h"

    static cairo_time_t
    do_my_new_test (cairo_t *cr, int width, int height)
    {
        cairo_perf_timer_start ();

        /* Make the cairo calls to be measured */

        cairo_perf_timer_stop ();

        return cairo_perf_timer_elapsed ();
    }

    void
    my_new_test (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
        /* First do any setup for which the execution time should not
         * be measured. For example, this might include loading
         * images from disk, creating patterns, etc. */

        /* Then launch the actual performance testing. */
        cairo_perf_run (perf, "my_new_test", do_my_new_test);

        /* Finally, perform any cleanup from the setup above. */
    }

That's really all there is to writing a new test. The first function
above is the one that does the real work and returns a timing number.
The second function is the one that will be called by the performance
test rig (see below for how to accomplish that); it allows multiple
performance cases to be written in one file (simply call
cairo_perf_run once for each case, passing the appropriate callback
function each time).
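
To make that concrete, here is a hypothetical test, filled in from the
template above, that would time solid fills (the names solid_fill and
do_solid_fill are made up for this illustration):

    #include "cairo-perf.h"

    static cairo_time_t
    do_solid_fill (cairo_t *cr, int width, int height)
    {
        cairo_perf_timer_start ();

        /* The calls being measured: fill the whole surface */
        cairo_rectangle (cr, 0, 0, width, height);
        cairo_fill (cr);

        cairo_perf_timer_stop ();

        return cairo_perf_timer_elapsed ();
    }

    void
    solid_fill (cairo_perf_t *perf, cairo_t *cr, int width, int height)
    {
        /* Setup that should not be timed: pick a source color */
        cairo_set_source_rgb (cr, 1, 0, 0);

        cairo_perf_run (perf, "solid_fill", do_solid_fill);
    }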

We go through this dance of indirectly calling your own function
through cairo_perf_run so that cairo_perf_run can call your function
many times and measure statistical properties over the many runs.

Finally, to fully integrate your new test case you just need to add
your new test to three different lists (TODO: we should set this up
better so that the lists are maintained automatically, computed from
the list of files in cairo/perf, for example). Here's what needs to be
added (a sketch for the hypothetical solid_fill test follows the
list):

1. Makefile.am: Add the new file name to the cairo_perf_SOURCES list.

2. cairo-perf.h: Add a new CAIRO_PERF_DECL line with the name of your
   function (my_new_test in the example above).

3. cairo-perf-micro.c: Add a new row to the list at the end of the
   file. A typical entry would look like:

       { my_new_test, 16, 64 }

   The last two numbers are the minimum and maximum image sizes at
   which your test should be exercised. If these values are the same,
   then only that size will be used. If they are different, then
   intermediate sizes will be generated by doubling. So in the example
   above, the test would be run at three sizes: 16x16, 32x32, and
   64x64.
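
For the hypothetical solid_fill test above, those three additions
might look like this (the file and symbol names are assumed):

    # Makefile.am: add to the cairo_perf_SOURCES list
    solid-fill.c

    /* cairo-perf.h */
    CAIRO_PERF_DECL (solid_fill);

    /* cairo-perf-micro.c: in the list of tests at the end of the file */
    { solid_fill, 16, 64 }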

How to run cairo-perf-diff on WINDOWS
-------------------------------------
This section explains the specifics of running cairo-perf-diff on
win32 platforms. It assumes that you have installed a UNIX-like shell
environment such as MSYS (distributed as part of MinGW).

1. From your MinGW32 window, be sure to have all of your MSVC
   environment variables set up for proper compilation using 'make'.

2. Add the %GitBaseDir%/Git/bin path to your environment, replacing
   %GitBaseDir% with whatever directory your Git version is installed
   to.

3. Comment out the "unset CDPATH" line in the git-sh-setup script
   (located inside the ...Git/bin directory) by putting a "#" at the
   beginning of the line, as sketched below.
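
For instance, after step 3 the relevant line of git-sh-setup would
read as follows (a sketch; the surrounding contents of the script
differ between Git versions):

    # unset CDPATH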

You should be ready to go!

From your MinGW32 window, go to your cairo/perf directory and run the
cairo-perf-diff script with the right arguments.

Thanks for your contributions and have fun with cairo!

TODO
----
Add a control language for crafting and running small sets of micro
benchmarks.