Compare commits

...

66 commits

Author SHA1 Message Date
Alexander Orzechowski
37f30dc8d0 drm: Make it clear that we tried to import a shm buffer
Reduces the guesswork in logs.
2026-05-06 19:56:19 -04:00
Kenny Levinsen
57441ded02 util/rect_union: Limit rect_union_add to 1024 rects
If a very large number of clip rects is accumulated in rect_union_add,
rect_union_evaluate can end up being disproportionately expensive, and
since an extreme number of clip rects is not beneficial for drawing
anyway, the extra work brings no benefit.

Limit the number of rects to 1024 in rect_union_add, switching over to
bounding box mode instead when the limit is exceeded. This leads to a
70% reduction in CPU time in the Vulkan renderer on the
stacked/clip200/1024 benchmarks.

Signed-off-by: Kenny Levinsen <kl@kl.wtf>
2026-05-01 12:50:04 +00:00
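A minimal sketch of the limiting behaviour described above. The struct layout, field names, and helper below are illustrative rather than the actual util/rect_union internals; only the 1024 limit and the switch to bounding-box mode come from the commit.

#include <pixman.h>
#include <stdbool.h>

#define MAX_UNION_RECTS 1024 /* limit introduced by this commit; macro name is illustrative */

/* Simplified stand-in for struct rect_union: a list of rects that collapses
 * into a single bounding box once the limit is exceeded. */
struct rect_union_sketch {
	pixman_box32_t rects[MAX_UNION_RECTS];
	int len;
	bool bounding_mode;
	pixman_box32_t bounds;
};

static void bounds_extend(pixman_box32_t *bounds, const pixman_box32_t *box) {
	if (box->x1 < bounds->x1) bounds->x1 = box->x1;
	if (box->y1 < bounds->y1) bounds->y1 = box->y1;
	if (box->x2 > bounds->x2) bounds->x2 = box->x2;
	if (box->y2 > bounds->y2) bounds->y2 = box->y2;
}

void rect_union_add_sketch(struct rect_union_sketch *r, const pixman_box32_t *box) {
	if (!r->bounding_mode && r->len == MAX_UNION_RECTS) {
		/* Limit exceeded: collapse the accumulated rects into one bounding
		 * box and stop tracking individual rects from now on. */
		r->bounding_mode = true;
		r->bounds = r->rects[0];
		for (int i = 1; i < r->len; i++) {
			bounds_extend(&r->bounds, &r->rects[i]);
		}
	}
	if (r->bounding_mode) {
		bounds_extend(&r->bounds, box);
	} else {
		r->rects[r->len++] = *box;
	}
}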
Kenny Levinsen
17c29268c9 util/rect_union: Take pixman_box32_t by pointer
rect_union_add takes a pixman_box32_t by value and passes it along by
value to internal helpers. render_pass_mark_box_updated, which is the
only caller, already receives the pixman_box32_t by reference, so just
plumb it through as a pointer.

Results in a 13% improvement in CPU time when using the Vulkan renderer
on the stacked/clip200/1024 benchmarks on my machine.

Signed-off-by: Kenny Levinsen <kl@kl.wtf>
2026-05-01 12:50:04 +00:00
Kenny Levinsen
8abe53d1d2 render/vulkan: Use instanced draws instead of scissors
Similar to what we have already done for gles2. To simplify things we
use the staging ring buffer for the vertex buffers by extending the
usage bits, rather than introducing a separate pool.

Signed-off-by: Kenny Levinsen <kl@kl.wtf>
2026-05-01 12:50:04 +00:00
Kenny Levinsen
439258a43b render/vulkan: Intersect clip region once
We are spending quite significant CPU time walking through the clip
rects, taking a pixman box, converting it to a wlr box, intersecting it
and ultimately converting it back to a pixman box before adding it to
the rect union.

Just intersect the clip region once ahead of time, and use pixman boxes
the entire way. This also makes it easy to bail early if nothing
intersects.

Gives a 97.95% reduction in CPU time for the Vulkan renderer in
the grid/clip200/1024 benchmark.

Signed-off-by: Kenny Levinsen <kl@kl.wtf>
2026-05-01 12:50:04 +00:00
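Roughly what "intersect once, stay in pixman boxes, bail early" looks like. The helper name and parameters are made up for illustration; rect_union_add() is shown with the by-pointer signature this series ends up with, and the pixman calls are the real API.

#include <pixman.h>
#include "util/rect_union.h" /* internal wlroots header, assumed for context */

static void add_clip_rects_sketch(struct rect_union *u,
		pixman_region32_t *clip, int width, int height) {
	/* Clamp the clip region to the render area once, up front. */
	pixman_region32_t clipped;
	pixman_region32_init(&clipped);
	pixman_region32_intersect_rect(&clipped, clip, 0, 0, width, height);

	if (!pixman_region32_not_empty(&clipped)) {
		/* Nothing intersects the render area: bail early. */
		pixman_region32_fini(&clipped);
		return;
	}

	int nrects;
	const pixman_box32_t *rects = pixman_region32_rectangles(&clipped, &nrects);
	for (int i = 0; i < nrects; i++) {
		rect_union_add(u, &rects[i]); /* no pixman <-> wlr box round-trips */
	}
	pixman_region32_fini(&clipped);
}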
Kenny Levinsen
b01fdc3164 render/vulkan: Add unit-test for staging buffer
Signed-off-by: Kenny Levinsen <kl@kl.wtf>
2026-05-01 12:50:04 +00:00
Kenny Levinsen
4ee896af99 render/vulkan: New staging buffer implementation
Implement a ring buffer that uses timeline points to track and release
allocated spans. We add larger ring buffers when the current one fills
up, and remove them once they have gone unused for many collection cycles.

Signed-off-by: Kenny Levinsen <kl@kl.wtf>
2026-05-01 12:50:04 +00:00
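A simplified sketch of the span-tracking idea: each submit records a watermark (ring head, timeline point), and once the GPU timeline passes a point everything up to that head can be reused. The real types are the wlr_vk_stage_buffer / wlr_vk_stage_watermark structs in the header diff further down; everything here is reduced for illustration and omits bounds checks.

#include <stddef.h>
#include <stdint.h>

struct stage_watermark_sketch {
	size_t head;             /* ring position when the work was submitted */
	uint64_t timeline_point; /* GPU timeline point that releases it */
};

struct stage_ring_sketch {
	size_t size, head, tail; /* allocate at head, reclaim at tail */
	struct stage_watermark_sketch marks[16];
	size_t nmarks;
};

/* Record a watermark when the staging command buffer is submitted. */
static void stage_ring_mark_submit_sketch(struct stage_ring_sketch *ring,
		uint64_t timeline_point) {
	ring->marks[ring->nmarks++] = (struct stage_watermark_sketch){
		.head = ring->head,
		.timeline_point = timeline_point,
	};
}

/* Reclaim every span covered by a watermark the GPU has already passed. */
static void stage_ring_reclaim_sketch(struct stage_ring_sketch *ring,
		uint64_t current_point) {
	size_t done = 0;
	while (done < ring->nmarks &&
			ring->marks[done].timeline_point <= current_point) {
		ring->tail = ring->marks[done].head;
		done++;
	}
	for (size_t i = done; i < ring->nmarks; i++) {
		ring->marks[i - done] = ring->marks[i]; /* drop consumed watermarks */
	}
	ring->nmarks -= done;
}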
xurui
02abf1cd28 xwm: fix memory leak
Signed-off-by: xurui <xurui@kylinos.cn>
2026-04-29 14:02:38 +08:00
Félix Poisot
8d0597e3db render/dmabuf: lower log level for sync file import/export failure
On platforms that lack the corresponding ioctl, this particular error
is expected and recoverable.
2026-04-22 17:25:02 +00:00
Félix Poisot
9740ec61c8 linux_drm_syncobj_v1: add _state_add_release_from_implicit_sync()
This can help to gradually convert existing components away from
`wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer()`.
2026-04-22 17:18:48 +00:00
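A rough before/after of that conversion, using the signatures declared in the wlr_linux_drm_syncobj_v1 header diff further down; state, buffer, and event_loop stand in for whatever the calling component already has.

/* Before: tie the release directly to the buffer via the old helper. */
wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer(state, buffer);

/* After: register the buffer's implicit DMA-BUF release as one accumulated
 * release point. The event loop is needed in case the platform lacks
 * DMA-BUF <-> sync_file interop and a wait has to be scheduled instead. */
if (!wlr_linux_drm_syncobj_v1_state_add_release_from_implicit_sync(state,
		buffer, event_loop)) {
	/* handle failure (sketch only) */
}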
Consolatis
f053f77dca toplevel_capture: allocate new_request argument on the stack
This fixes a memory leak.
2026-04-21 21:27:19 +00:00
Consolatis
700ee83ab8 scene/surface: schedule on frame pacing output
Otherwise wlr_scene_output_send_frame_done() drops frame callbacks
if only part of a window on a non-frame-pacing output is damaged.
2026-04-21 21:17:46 +00:00
rewine
e3e8877aa6 xdg-shell/layer-shell: use same unmap assert 2026-04-21 21:15:30 +00:00
Kenny Levinsen
fba00c4a04 wlr_compositor: Apply state before updating surface_damage
When locking surface state, surface_cache_pending will move the pending
surface state to a new, empty `wlr_surface_state`. This new surface
state will only contain the fields committed in the pending state, as
surface_state_move does not copy anything else.

surface_update_damage is called before we move state from pending to
current to merge buffer damage and surface damage, and it expects that
the pending surface state still contains prior committed details such as
scale and transform. This is not the case when we finally commit the
cached surface state.

Move surface_update_damage after surface_state_move and make it operate
purely on the current surface state.
2026-04-21 21:12:05 +00:00
llyyr
70d99eefef render/vulkan: destroy image view on allocation failure 2026-04-19 14:49:46 +00:00
llyyr
b808144b7b render/vulkan: don't free sync file fd on failure in submit_stage_wait
The caller does this already on failure
2026-04-19 14:49:46 +00:00
llyyr
39ea626e8f render/vulkan: free all render pass allocations on failure
Instead of just the struct
2026-04-19 14:49:46 +00:00
llyyr
53b8352526 render/vulkan: make bool function return false instead of NULL 2026-04-19 14:49:46 +00:00
Guido Günther
9479b45642 render/gles2: Fix wording
Drop superfluous "is" and use "They" to refer to the states.

Signed-off-by: Guido Günther <agx@sigxcpu.org>
2026-04-19 15:33:56 +02:00
Johan Malm
e8c03e9ce9 wlr_ext_workspace_v1.c: add new workspaces to end of list
...so that panels/bars like xfce4-panel and waybar show workspaces in the
right order.
2026-04-12 10:11:13 +00:00
tokyo4j
8dd1a77615 ext-foreign-toplevel-list: add new toplevels to end of list
Same as the prior patch for wlr-foreign-toplevel-management.
2026-04-12 13:03:33 +09:00
tokyo4j
84b45e4157 wlr-foreign-toplevel-management: add new toplevels to end of list
Prior to this patch, when the client binds the manager, the existing
toplevel handles were notified in the opposite order of their creation,
as wl_list_insert() adds a new handle to the head of the list and
wl_list_for_each() iterates from head to tail.
2026-04-12 00:25:49 +09:00
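The fix in the three list-ordering commits above boils down to appending at the tail: wl_list_insert() links the new element right after the element passed in, so passing the list head prepends while passing the head's prev pointer appends. The container and member names below are placeholders.

/* Old: prepend, so a client binding later sees handles newest-first. */
wl_list_insert(&manager->toplevels, &handle->link);

/* New: insert after the current tail, preserving creation order. */
wl_list_insert(manager->toplevels.prev, &handle->link);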
Kenny Levinsen
35c35530a3 vulkan: Add support for render timer using timestamp queries 2026-04-09 15:13:49 +02:00
Dylan Donnell
8a9e3a84b5 ext_background_effect_v1: add implementation 2026-04-09 10:43:46 +00:00
Simon Ser
c393fb6bfa wlr_virtual_keyboard_v1: specify size when creating keymap
xkb_keymap_new_from_string() assumes that the string is
NULL-terminated, but we don't check this, so the function might
access memory outside the mmap'ed region. Use the safer
xkb_keymap_new_from_buffer() function.

Reported-by: Julian Orth <ju.orth@gmail.com>
Closes: https://gitlab.freedesktop.org/wlroots/wlroots/-/work_items/4072
2026-04-07 09:43:08 +00:00
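For reference, the safer call bounds the parse by the size the client reported instead of trusting a terminating NUL; context, data, and size below are placeholders for the xkb context and the mmap'ed keymap data.

/* xkb_keymap_new_from_string() keeps reading until it finds a NUL, which may
 * lie outside the mmap'ed region; passing the length explicitly avoids that. */
struct xkb_keymap *keymap = xkb_keymap_new_from_buffer(context, data, size,
	XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);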
Isaac Freund
9de0ec3089 keyboard: fix modifiers event when no keymap set
Actually send the modifiers event when there is no keymap set.

Compositors may need lower-level "raw" access to the key/modifiers
events from the backend. Currently it is impossible for a compositor
to get modifier events from the backend without them being filtered
through an xkb_state controlled by wlroots.

I need this feature for river in order to fix some keyboard state
synchronization bugs.

Note that setting a keymap resets the modifiers so I don't think setting
wlr_keyboard->modifiers directly here is a concern.
2026-04-05 10:52:25 +00:00
Consolatis
c66a910753 render/pixman: fix bilinear filtering to match gles2 renderer
Before this patch the pixman renderer would use "constant padding"
for bilinear scaling, which meant that the edges would either be dark
or turn transparent. The effect was most obvious when trying to scale
a single-row buffer to a height like 100. The center would have the
desired color but the edges on both sides would fade into transparency.

We now use PIXMAN_REPEAT_PAD, which clamps out-of-bounds pixels and
seems to match the behavior of the gles2 renderer.
2026-04-02 00:59:44 +02:00
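The change amounts to selecting a different repeat mode on the source image; pixman's API for that is shown below, with src_image standing in for the renderer's source image.

/* PAD repeat clamps out-of-bounds samples to the nearest edge pixel (matching
 * gles2's CLAMP_TO_EDGE) instead of blending towards a transparent border. */
pixman_image_set_repeat(src_image, PIXMAN_REPEAT_PAD);
pixman_image_set_filter(src_image, PIXMAN_FILTER_BILINEAR, NULL, 0);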
Isaac Freund
e22084f639 ext_image_capture_source_v1/scene: fix extents
Currently the width/height of the extents is too small if the first node
visited has position/dimensions 0,0,100,100 and the second node has
position/dimensions -20,-20,10,10.

In this case the current code calculates total extents as
-20,-20,100,100 but the correct extents are -20,-20,120,120.

References: https://codeberg.org/river/river-classic/issues/17
2026-03-30 17:23:02 +02:00
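The bug is merging extents by keeping the largest width/height instead of tracking edges. A correct merge takes the minimum of the origins and the maximum of the far edges, as in this standalone sketch (the helper name is illustrative):

#include <wlr/util/box.h>

/* Merging (0,0,100,100) with (-20,-20,10,10) this way yields -20,-20,120,120. */
static void extents_union_sketch(struct wlr_box *dst, const struct wlr_box *node) {
	int x1 = dst->x < node->x ? dst->x : node->x;
	int y1 = dst->y < node->y ? dst->y : node->y;
	int x2 = dst->x + dst->width > node->x + node->width ?
		dst->x + dst->width : node->x + node->width;
	int y2 = dst->y + dst->height > node->y + node->height ?
		dst->y + dst->height : node->y + node->height;
	dst->x = x1;
	dst->y = y1;
	dst->width = x2 - x1;
	dst->height = y2 - y1;
}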
Simon Ser
334019f839 render/drm_syncobj: use drmSyncobjEventfd()
Avoids using a raw IOCTL directly.

This function was introduced way back in libdrm 2.4.116.
2026-03-27 18:09:07 +00:00
Simon Ser
a1ed6fca52 render/drm_syncobj: fix flags docs for wlr_drm_syncobj_timeline_waiter_init()
wlr_drm_syncobj_timeline_check() is a bit different because passing
zero flags will error out if the point has not materialized yet.

Kernel docs for struct drm_syncobj_eventfd:
https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#c.drm_syncobj_eventfd
2026-03-27 18:02:46 +00:00
Félix Poisot
f295d0322a render: explicit sync for wlr_texture_read_pixels() 2026-03-26 12:44:09 +00:00
Simon Zeni
413664e0b0 render/vulkan: compile against vulkan 1.2 header
Use the EXT version of VK_PIPELINE_COMPILE_REQUIRED in the `vulkan_strerror`
function, since the core value requires Vulkan 1.3, and switch to
VK_EXT_global_priority instead of VK_KHR_global_priority, which is likewise
only promoted to core in Vulkan 1.3.
2026-03-24 12:33:12 -04:00
Félix Poisot
fd870f6d27 linux_drm_syncobj_v1: fix handling of empty first commit
As reported in
https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/4979#note_3385626,
bfd6e619fc did not correctly handle clients
that don't immediately follow their call to
`wp_linux_drm_syncobj_manager_v1.get_surface` with a commit attaching
a buffer.

Fixes: bfd6e619fc
2026-03-20 15:36:16 +00:00
Simon Ser
4ca40004fd color_management_v1: ignore surface update if no-op
If the new image description is identical to the old one, skip the
event.
2026-03-19 21:46:28 +00:00
Simon Ser
7287f700ab color_management_v1: use early continue in surface loop 2026-03-19 21:46:28 +00:00
Simon Ser
8fe3034948 tinywl: bump wlroots version to 0.21 2026-03-19 20:23:35 +01:00
Simon Ser
627da39e76 build: bump version to 0.21.0-dev 2026-03-19 20:14:53 +01:00
Félix Poisot
8d454e1e34 output/drm: don't use OUT_FENCE_PTR
The returned fence is not required to be signalled at the earliest
possible time. It is not intended to replace the DRM flip event, and is
expected to be signalled only much later.
2026-03-17 18:14:35 +00:00
Félix Poisot
cd555f9261 backend/drm: properly delay syncobj signalling
The DRM CRTC signals when scanout begins, but
wlr_output_state_set_signal_timeline() is defined to signal buffer
release. Delay the signal until the next page flip.
2026-03-17 18:14:35 +00:00
Félix Poisot
b2f6a390a4 scene: transfer sample syncobj to client timeline 2026-03-17 18:14:35 +00:00
Félix Poisot
bfd6e619fc linux_drm_syncobj_v1: add release point accumulation
This changes the behavior of wlr_linux_drm_syncobj_surface_v1 to
automatically signal release of previous commits as they are replaced.

Users must call wlr_linux_drm_syncobj_v1_state_add_release_point or
wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer to delay the
signal as appropriate.
2026-03-17 18:14:35 +00:00
Félix Poisot
e83a679e23 drm/syncobj: add timeline point merger utility 2026-03-17 18:14:35 +00:00
Félix Poisot
1f3d351abb scene: add buffer release point to 'sample' event 2026-03-17 18:14:35 +00:00
Félix Poisot
0af9b9d003 render/drm_syncobj: add wlr_drm_syncobj_timeline_signal() 2026-03-17 18:13:10 +00:00
David Turner
abb6eeb422 backend/drm/atomic: Add support for color representation
Basic implementation of color representation in drm/atomic: when buffers
that have color-representation data attached are presented for scanout,
set the correct color encoding and range on the plane. If the plane
does not support color-representation, the commit will fail and the
caller can retry without color-representation.
2026-03-17 16:32:30 +00:00
David Turner
dca0703dac backend/drm: Add color_range/encoding properties
Add the following optional DRM properties, for use by the
color-representation-v1 protocol:
- COLOR_ENCODING
- COLOR_RANGE
2026-03-17 16:32:30 +00:00
David Turner
80bcef908b scene: Set color representation on scanout
When doing direct scanout, if the surface has color-representation
metadata present, pass that metadata on to the output state.

Also, if a buffer has color representation IDENTITY+FULL, normalise
this to NONE+NONE, which is equivalent.
2026-03-17 16:32:30 +00:00
David Turner
58c158dba6 output: Add color-representation to output state
Add color_representation to wlr_output_state, holding color
representation metadata about the primary buffer.  This can be set
using wlr_output_state_set_primary_color_representation() and a
new enum value WLR_OUTPUT_STATE_COLOR_REPRESENTATION in
wlr_output_state.committed indicates when this data is present.

Also add color-representation to wlr_output, and discard
color-representation in wlr_output_state if it matches what's already
been committed to the output.
2026-03-17 16:32:30 +00:00
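Compositor-side usage would look roughly like the sketch below. Note the commit text names wlr_output_state_set_primary_color_representation() while the wlr_output.h diff further down declares wlr_output_state_set_color_encoding_and_range(); the sketch follows the header, and output and buffer are placeholders.

struct wlr_output_state state;
wlr_output_state_init(&state);
wlr_output_state_set_buffer(&state, buffer); /* e.g. a YCbCr scanout buffer */
wlr_output_state_set_color_encoding_and_range(&state,
	WLR_COLOR_ENCODING_BT709, WLR_COLOR_RANGE_LIMITED);
if (!wlr_output_commit_state(output, &state)) {
	/* The plane may lack COLOR_ENCODING/COLOR_RANGE: retry without it. */
}
wlr_output_state_finish(&state);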
Isaac Freund
1fa8bb8f7a virtual-keyboard: handle seat destroy
We must make the virtual keyboard inert when the seat is destroyed.
2026-03-17 09:53:32 +01:00
Isaac Freund
ec746d3e3e virtual-keyboard: add wlr_virtual_keyboard_v1_from_resource()
I want to use the zwp_virtual_keyboard_v1 object in a custom river
protocol and need to be able to obtain the corresponding wlroots struct.
2026-03-17 08:45:12 +00:00
Scott Moreau
1fc928d528 wlr_ext_image_copy_capture_v1: Fix crash when a client creates a cursor session that the server has not implemented
This guards against a crash where the server implements
wlr_ext_image_capture_source_v1_interface without setting .get_pointer_cursor().
In general, we should install a NULL check here because this is a crash
waiting to happen. Now, instead of crashing, the resource will be created and
the copy capture session will be stopped.
2026-03-14 19:04:42 +00:00
llyyr
3cb2cf9425 scene: use wl_list_for_each_safe to iterate outputs
The outputs loop in handle_scene_buffer_outputs_update may remove entries
from the list while iterating, so use wl_list_for_each_safe instead of
wl_list_for_each.

Fixes: 39e918edc8 ("scene: avoid redundant wl_surface.enter/leave events")
2026-03-13 23:32:06 +05:30
Alexander Orzechowski
7f87c7fe90 wlr_scene: Nuke buffer output_enter / output_leave
outputs_update should be used instead.
2026-03-13 13:19:06 -04:00
Isaac Freund
39e918edc8 scene: avoid redundant wl_surface.enter/leave events
Currently we send wl_surface.enter/leave when a surface is hidden
and shown again on the same output. In practice, this happens very
often since compositors like river and sway enable and disable
the scene nodes of surfaces as part of their atomic transaction
strategy involving rendering saved buffers while waiting for
clients to submit new buffers of the desired size.

The new strategy documented in the new comments avoids sending
redundant events in this case.
2026-03-13 17:59:13 +01:00
Diego Viola
736c0f3f25 wlr-export-dmabuf-unstable-v1: fix typo 2026-03-12 11:03:57 +00:00
Jonathan Marler
3c8d199ec1 backend/x11: ignore DestroyNotify events
The X11 backend subscribes to StructureNotify events, so when
output_destroy() calls xcb_destroy_window() the server sends a
DestroyNotify back. This is expected and harmless but was logged
as an unhandled event. Silence it the same way MAP_NOTIFY and
UNMAP_NOTIFY are already silenced.
2026-03-10 22:44:56 -06:00
Kenny Levinsen
7ccef7d9eb Adopt wlr_box_intersects where useful
This makes wlr_scene_node_at roughly 50% faster, and gives a minor boost
to node modification as well.

Before:

create test tree:               7030 nodes, 473.510 ms, 15 nodes/ms
wlr_scene_node_at:              10000 iters, 894.945 ms, 78552 nodes/ms (hits: 10/10000)
wlr_scene_node_for_each_buffer: 10000 iters, 330.597 ms, 212646 nodes/ms (hits: 0/10000)

After:

create test tree:               7030 nodes, 385.930 ms, 18 nodes/ms
wlr_scene_node_at:              10000 iters, 586.013 ms, 119963 nodes/ms (hits: 10/10000)
wlr_scene_node_for_each_buffer: 10000 iters, 334.559 ms, 210127 nodes/ms (hits: 0/10000)
2026-03-09 22:09:40 +00:00
Kenny Levinsen
2938c10cd3 ci: Run tests and benchmarks 2026-03-09 22:09:40 +00:00
Kenny Levinsen
648790f43a tests: Initial test and benchmark setup
Add a unit test for wlr_box and benchmark for wlr_scene_node_at as our
first test examples, lowering the barrier for adding more tests as
suitable.
2026-03-09 22:09:40 +00:00
Kenny Levinsen
ff7d093800 util/box: Add wlr_box_intersects
wlr_box_intersection generates a new box based on the intersection of
two boxes. Often we simply want to know *if* two boxes intersect,
which we can answer much more cheaply.

Add wlr_box_intersects, in a similar vein to wlr_box_contains_box but
returning true for any overlap.
2026-03-09 22:09:40 +00:00
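A sketch matching the documented semantics (any overlap returns true, empty boxes never intersect); the actual implementation lives in util/box.c and may differ in detail.

#include <stdbool.h>
#include <wlr/util/box.h>

static bool box_intersects_sketch(const struct wlr_box *a, const struct wlr_box *b) {
	if (wlr_box_empty(a) || wlr_box_empty(b)) {
		return false;
	}
	return a->x < b->x + b->width && b->x < a->x + a->width &&
		a->y < b->y + b->height && b->y < a->y + a->height;
}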
Kenny Levinsen
285cee5f3a util/box: Use integer min/max for intersection
wlr_box_intersection only operates on integers, so we shouldn't use
fmin/fmax. Do the usual and add a local integer min/max helper.
2026-03-09 22:09:40 +00:00
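I.e. something along these lines (the exact helper names in util/box.c may differ):

/* Integer counterparts of fmin()/fmax() for wlr_box_intersection. */
static int int_min(int a, int b) { return a < b ? a : b; }
static int int_max(int a, int b) { return a > b ? a : b; }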
Christopher Snowhill
9a931d9ffa scene: fix color format compare
bool doesn't really support negative values.

Fixes: 7cb3393e7 (scene: send color_management_v1 surface feedback)
2026-03-06 18:44:26 -08:00
hrdl
3dafaa4df3 render/vulkan: relax minimum Vulkan API version to 1.0
This allows using the vulkan renderer on platforms that provide all
the necessary Vulkan extensions.

Tested on a Mali G52 platform with Mesa 26.0.0 and 25.3.5, which only
support Vulkan API 1.0.
2026-03-06 15:45:05 +01:00
Simon Zeni
67ce318b1f ci: update dalligi upstream repo 2026-03-06 09:37:31 -05:00
YaoBing Xiao
14b3c96c1e treewide: make type-check helpers take const pointers 2026-03-06 16:04:21 +08:00
Andri Yngvason
3336d28813 image_capture_source/output: Update constraints on enable
Without observing the enable event, clients receive no pixel formats and
buffer dimensions are reported as 0 after an output has been re-enabled.
2026-03-05 21:28:55 +00:00
92 changed files with 2359 additions and 655 deletions


@ -35,6 +35,9 @@ tasks:
cd wlroots
ninja -C build
sudo ninja -C build install
- test: |
cd wlroots
meson test -C build --verbose
- build-features-disabled: |
cd wlroots
meson setup build --reconfigure -Dauto_features=disabled


@ -37,6 +37,10 @@ tasks:
- clang: |
cd wlroots/build-clang
ninja
- test: |
cd wlroots/build-gcc
meson test --verbose
meson test --benchmark --verbose
- smoke-test: |
cd wlroots/build-gcc/tinywl
sudo modprobe vkms


@ -32,6 +32,9 @@ tasks:
meson setup build --fatal-meson-warnings -Dauto_features=enabled -Dallocators=gbm
ninja -C build
sudo ninja -C build install
- test: |
cd wlroots
meson test -C build --verbose
- tinywl: |
cd wlroots/tinywl
make


@ -1,4 +1,4 @@
include: https://git.sr.ht/~emersion/dalligi/blob/master/templates/multi.yml
include: https://gitlab.freedesktop.org/emersion/dalligi/-/raw/master/templates/multi.yml
alpine:
extends: .dalligi
pages: true


@ -1,3 +1,4 @@
#include <assert.h>
#include <drm_fourcc.h>
#include <stdlib.h>
#include <stdio.h>
@ -423,12 +424,6 @@ void drm_atomic_connector_apply_commit(struct wlr_drm_connector_state *state) {
if (state->primary_in_fence_fd >= 0) {
close(state->primary_in_fence_fd);
}
if (state->out_fence_fd >= 0) {
// TODO: error handling
wlr_drm_syncobj_timeline_import_sync_file(state->base->signal_timeline,
state->base->signal_point, state->out_fence_fd);
close(state->out_fence_fd);
}
conn->colorspace = state->colorspace;
}
@ -446,9 +441,6 @@ void drm_atomic_connector_rollback_commit(struct wlr_drm_connector_state *state)
if (state->primary_in_fence_fd >= 0) {
close(state->primary_in_fence_fd);
}
if (state->out_fence_fd >= 0) {
close(state->out_fence_fd);
}
}
static void plane_disable(struct atomic *atom, struct wlr_drm_plane *plane) {
@ -484,6 +476,62 @@ static void set_plane_props(struct atomic *atom, struct wlr_drm_backend *drm,
atomic_add(atom, id, props->crtc_h, dst_box->height);
}
static void set_color_encoding_and_range(struct atomic *atom,
struct wlr_drm_backend *drm, struct wlr_drm_plane *plane,
enum wlr_color_encoding encoding, enum wlr_color_range range) {
uint32_t id = plane->id;
const struct wlr_drm_plane_props *props = &plane->props;
uint32_t color_encoding;
switch (encoding) {
case WLR_COLOR_ENCODING_NONE:
case WLR_COLOR_ENCODING_BT601:
color_encoding = WLR_DRM_COLOR_YCBCR_BT601;
break;
case WLR_COLOR_ENCODING_BT709:
color_encoding = WLR_DRM_COLOR_YCBCR_BT709;
break;
case WLR_COLOR_ENCODING_BT2020:
color_encoding = WLR_DRM_COLOR_YCBCR_BT2020;
break;
default:
wlr_log(WLR_DEBUG, "Unsupported color encoding %d", encoding);
atom->failed = true;
return;
}
if (props->color_encoding) {
atomic_add(atom, id, props->color_encoding, color_encoding);
} else {
wlr_log(WLR_DEBUG, "Plane %"PRIu32" is missing the COLOR_ENCODING property",
id);
atom->failed = true;
return;
}
uint32_t color_range;
switch (range) {
case WLR_COLOR_RANGE_NONE:
case WLR_COLOR_RANGE_LIMITED:
color_range = WLR_DRM_COLOR_YCBCR_LIMITED_RANGE;
break;
case WLR_COLOR_RANGE_FULL:
color_range = WLR_DRM_COLOR_YCBCR_FULL_RANGE;
break;
default:
assert(0); // Unreachable
}
if (props->color_range) {
atomic_add(atom, id, props->color_range, color_range);
} else {
wlr_log(WLR_DEBUG, "Plane %"PRIu32" is missing the COLOR_RANGE property",
id);
atom->failed = true;
return;
}
}
static bool supports_cursor_hotspots(const struct wlr_drm_plane *plane) {
return plane->props.hotspot_x && plane->props.hotspot_y;
}
@ -500,19 +548,6 @@ static void set_plane_in_fence_fd(struct atomic *atom,
atomic_add(atom, plane->id, plane->props.in_fence_fd, sync_file_fd);
}
static void set_crtc_out_fence_ptr(struct atomic *atom, struct wlr_drm_crtc *crtc,
int *fd_ptr) {
if (!crtc->props.out_fence_ptr) {
wlr_log(WLR_ERROR,
"CRTC %"PRIu32" is missing the OUT_FENCE_PTR property",
crtc->id);
atom->failed = true;
return;
}
atomic_add(atom, crtc->id, crtc->props.out_fence_ptr, (uintptr_t)fd_ptr);
}
static void atomic_connector_add(struct atomic *atom,
struct wlr_drm_connector_state *state, bool modeset) {
struct wlr_drm_connector *conn = state->connector;
@ -550,6 +585,10 @@ static void atomic_connector_add(struct atomic *atom,
set_plane_props(atom, drm, crtc->primary, state->primary_fb, crtc->id,
&state->primary_viewport.dst_box, &state->primary_viewport.src_box);
if (state->base->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) {
set_color_encoding_and_range(atom, drm, crtc->primary,
state->base->color_encoding, state->base->color_range);
}
if (crtc->primary->props.fb_damage_clips != 0) {
atomic_add(atom, crtc->primary->id,
crtc->primary->props.fb_damage_clips, state->fb_damage_clips);
@ -557,9 +596,6 @@ static void atomic_connector_add(struct atomic *atom,
if (state->primary_in_fence_fd >= 0) {
set_plane_in_fence_fd(atom, crtc->primary, state->primary_in_fence_fd);
}
if (state->base->committed & WLR_OUTPUT_STATE_SIGNAL_TIMELINE) {
set_crtc_out_fence_ptr(atom, crtc, &state->out_fence_fd);
}
if (crtc->cursor) {
if (drm_connector_is_cursor_visible(conn)) {
struct wlr_fbox cursor_src = {


@ -95,7 +95,7 @@ static const struct wlr_backend_impl backend_impl = {
.commit = backend_commit,
};
bool wlr_backend_is_drm(struct wlr_backend *b) {
bool wlr_backend_is_drm(const struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -43,7 +43,8 @@ static const uint32_t COMMIT_OUTPUT_STATE =
WLR_OUTPUT_STATE_WAIT_TIMELINE |
WLR_OUTPUT_STATE_SIGNAL_TIMELINE |
WLR_OUTPUT_STATE_COLOR_TRANSFORM |
WLR_OUTPUT_STATE_IMAGE_DESCRIPTION;
WLR_OUTPUT_STATE_IMAGE_DESCRIPTION |
WLR_OUTPUT_STATE_COLOR_REPRESENTATION;
static const uint32_t SUPPORTED_OUTPUT_STATE =
WLR_OUTPUT_STATE_BACKEND_OPTIONAL | COMMIT_OUTPUT_STATE;
@ -367,7 +368,17 @@ static void drm_plane_finish_surface(struct wlr_drm_plane *plane) {
}
drm_fb_clear(&plane->queued_fb);
if (plane->queued_release_timeline != NULL) {
wlr_drm_syncobj_timeline_signal(plane->queued_release_timeline, plane->queued_release_point);
wlr_drm_syncobj_timeline_unref(plane->queued_release_timeline);
plane->queued_release_timeline = NULL;
}
drm_fb_clear(&plane->current_fb);
if (plane->current_release_timeline != NULL) {
wlr_drm_syncobj_timeline_signal(plane->current_release_timeline, plane->current_release_point);
wlr_drm_syncobj_timeline_unref(plane->current_release_timeline);
plane->current_release_timeline = NULL;
}
finish_drm_surface(&plane->mgpu_surf);
}
@ -556,6 +567,18 @@ static void drm_connector_apply_commit(const struct wlr_drm_connector_state *sta
struct wlr_drm_crtc *crtc = conn->crtc;
drm_fb_copy(&crtc->primary->queued_fb, state->primary_fb);
if (crtc->primary->queued_release_timeline != NULL) {
wlr_drm_syncobj_timeline_signal(crtc->primary->queued_release_timeline, crtc->primary->queued_release_point);
wlr_drm_syncobj_timeline_unref(crtc->primary->queued_release_timeline);
}
if (state->base->signal_timeline != NULL) {
crtc->primary->queued_release_timeline = wlr_drm_syncobj_timeline_ref(state->base->signal_timeline);
crtc->primary->queued_release_point = state->base->signal_point;
} else {
crtc->primary->queued_release_timeline = NULL;
crtc->primary->queued_release_point = 0;
}
crtc->primary->viewport = state->primary_viewport;
if (crtc->cursor != NULL) {
drm_fb_copy(&crtc->cursor->queued_fb, state->cursor_fb);
@ -646,7 +669,6 @@ static void drm_connector_state_init(struct wlr_drm_connector_state *state,
.base = base,
.active = output_pending_enabled(&conn->output, base),
.primary_in_fence_fd = -1,
.out_fence_fd = -1,
};
struct wlr_output_mode *mode = conn->output.current_mode;
@ -1299,7 +1321,7 @@ static const struct wlr_output_impl output_impl = {
.get_primary_formats = drm_connector_get_primary_formats,
};
bool wlr_output_is_drm(struct wlr_output *output) {
bool wlr_output_is_drm(const struct wlr_output *output) {
return output->impl == &output_impl;
}
@ -2018,6 +2040,14 @@ static void handle_page_flip(int fd, unsigned seq,
struct wlr_drm_plane *plane = conn->crtc->primary;
if (plane->queued_fb) {
drm_fb_move(&plane->current_fb, &plane->queued_fb);
if (plane->current_release_timeline != NULL) {
wlr_drm_syncobj_timeline_signal(plane->current_release_timeline, plane->current_release_point);
wlr_drm_syncobj_timeline_unref(plane->current_release_timeline);
}
plane->current_release_timeline = plane->queued_release_timeline;
plane->current_release_point = plane->queued_release_point;
plane->queued_release_timeline = NULL;
plane->queued_release_point = 0;
}
if (conn->crtc->cursor && conn->crtc->cursor->queued_fb) {
drm_fb_move(&conn->crtc->cursor->current_fb,


@ -144,7 +144,13 @@ static struct wlr_drm_fb *drm_fb_create(struct wlr_drm_backend *drm,
struct wlr_buffer *buf, const struct wlr_drm_format_set *formats) {
struct wlr_dmabuf_attributes attribs;
if (!wlr_buffer_get_dmabuf(buf, &attribs)) {
wlr_log(WLR_DEBUG, "Failed to get DMA-BUF from buffer");
struct wlr_shm_attributes shm;
if (wlr_buffer_get_shm(buf, &shm)) {
wlr_log(WLR_DEBUG, "Failed to get DMA-BUF from shm buffer");
} else {
wlr_log(WLR_DEBUG, "Failed to get DMA-BUF from buffer");
}
return NULL;
}


@ -352,10 +352,6 @@ static bool add_connector(drmModeAtomicReq *req,
liftoff_layer_set_property(crtc->liftoff_composition_layer,
"IN_FENCE_FD", state->primary_in_fence_fd);
}
if (state->base->committed & WLR_OUTPUT_STATE_SIGNAL_TIMELINE) {
ok = ok && add_prop(req, crtc->id, crtc->props.out_fence_ptr,
(uintptr_t)&state->out_fence_fd);
}
if (state->base->committed & WLR_OUTPUT_STATE_LAYERS) {
for (size_t i = 0; i < state->base->layers_len; i++) {


@ -50,6 +50,8 @@ static const struct prop_info crtc_info[] = {
static const struct prop_info plane_info[] = {
#define INDEX(name) (offsetof(struct wlr_drm_plane_props, name) / sizeof(uint32_t))
{ "COLOR_ENCODING", INDEX(color_encoding) },
{ "COLOR_RANGE", INDEX(color_range) },
{ "CRTC_H", INDEX(crtc_h) },
{ "CRTC_ID", INDEX(crtc_id) },
{ "CRTC_W", INDEX(crtc_w) },


@ -81,6 +81,6 @@ struct wlr_backend *wlr_headless_backend_create(struct wl_event_loop *loop) {
return &backend->backend;
}
bool wlr_backend_is_headless(struct wlr_backend *backend) {
bool wlr_backend_is_headless(const struct wlr_backend *backend) {
return backend->impl == &backend_impl;
}


@ -106,7 +106,7 @@ static const struct wlr_output_impl output_impl = {
.move_cursor = output_move_cursor,
};
bool wlr_output_is_headless(struct wlr_output *wlr_output) {
bool wlr_output_is_headless(const struct wlr_output *wlr_output) {
return wlr_output->impl == &output_impl;
}


@ -155,7 +155,7 @@ static const struct wlr_backend_impl backend_impl = {
.destroy = backend_destroy,
};
bool wlr_backend_is_libinput(struct wlr_backend *b) {
bool wlr_backend_is_libinput(const struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -173,7 +173,7 @@ struct wlr_backend *wlr_multi_backend_create(struct wl_event_loop *loop) {
return &backend->backend;
}
bool wlr_backend_is_multi(struct wlr_backend *b) {
bool wlr_backend_is_multi(const struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -577,7 +577,7 @@ static const struct wlr_backend_impl backend_impl = {
.get_drm_fd = backend_get_drm_fd,
};
bool wlr_backend_is_wl(struct wlr_backend *b) {
bool wlr_backend_is_wl(const struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -1027,7 +1027,7 @@ static const struct wlr_output_impl output_impl = {
.get_primary_formats = output_get_formats,
};
bool wlr_output_is_wl(struct wlr_output *wlr_output) {
bool wlr_output_is_wl(const struct wlr_output *wlr_output) {
return wlr_output->impl == &output_impl;
}


@ -115,6 +115,7 @@ static void handle_x11_event(struct wlr_x11_backend *x11,
handle_x11_error(x11, ev);
break;
}
case XCB_DESTROY_NOTIFY:
case XCB_UNMAP_NOTIFY:
case XCB_MAP_NOTIFY:
break;
@ -218,7 +219,7 @@ static const struct wlr_backend_impl backend_impl = {
.get_drm_fd = backend_get_drm_fd,
};
bool wlr_backend_is_x11(struct wlr_backend *backend) {
bool wlr_backend_is_x11(const struct wlr_backend *backend) {
return backend->impl == &backend_impl;
}


@ -711,7 +711,7 @@ void handle_x11_configure_notify(struct wlr_x11_output *output,
wlr_output_state_finish(&state);
}
bool wlr_output_is_x11(struct wlr_output *wlr_output) {
bool wlr_output_is_x11(const struct wlr_output *wlr_output) {
return wlr_output->impl == &output_impl;
}


@ -29,8 +29,12 @@ struct wlr_drm_plane {
/* Buffer submitted to the kernel, will be presented on next vblank */
struct wlr_drm_fb *queued_fb;
struct wlr_drm_syncobj_timeline *queued_release_timeline;
uint64_t queued_release_point;
/* Buffer currently displayed on screen */
struct wlr_drm_fb *current_fb;
struct wlr_drm_syncobj_timeline *current_release_timeline;
uint64_t current_release_point;
/* Viewport belonging to the last committed fb */
struct wlr_drm_viewport viewport;
@ -156,7 +160,7 @@ struct wlr_drm_connector_state {
uint32_t mode_id;
uint32_t gamma_lut;
uint32_t fb_damage_clips;
int primary_in_fence_fd, out_fence_fd;
int primary_in_fence_fd;
bool vrr_enabled;
uint32_t colorspace;
uint32_t hdr_output_metadata;


@ -65,6 +65,22 @@ struct wlr_drm_plane_props {
uint32_t hotspot_x;
uint32_t hotspot_y;
uint32_t in_fence_fd;
uint32_t color_encoding; // Not guaranteed to exist
uint32_t color_range; // Not guaranteed to exist
};
// Equivalent to wlr_drm_color_encoding defined in the kernel (but not exported)
enum wlr_drm_color_encoding {
WLR_DRM_COLOR_YCBCR_BT601,
WLR_DRM_COLOR_YCBCR_BT709,
WLR_DRM_COLOR_YCBCR_BT2020,
};
// Equivalent to wlr_drm_color_range defined in the kernel (but not exported)
enum wlr_drm_color_range {
WLR_DRM_COLOR_YCBCR_FULL_RANGE,
WLR_DRM_COLOR_YCBCR_LIMITED_RANGE,
};
bool get_drm_connector_props(int fd, uint32_t id,


@ -0,0 +1,64 @@
#ifndef WLR_RENDER_DRM_SYNCOBJ_MERGER_H
#define WLR_RENDER_DRM_SYNCOBJ_MERGER_H
#include <wayland-server-core.h>
struct wlr_buffer;
/**
* Accumulate timeline points, to have a destination timeline point be
* signalled when all inputs are
*/
struct wlr_drm_syncobj_merger {
int n_ref;
struct wlr_drm_syncobj_timeline *dst_timeline;
uint64_t dst_point;
int sync_fd;
};
/**
* Create a new merger.
*
* The given timeline point will be signalled when all input points are
* signalled and the merger is destroyed.
*/
struct wlr_drm_syncobj_merger *wlr_drm_syncobj_merger_create(
struct wlr_drm_syncobj_timeline *dst_timeline, uint64_t dst_point);
struct wlr_drm_syncobj_merger *wlr_drm_syncobj_merger_ref(
struct wlr_drm_syncobj_merger *merger);
/**
* Target timeline point is materialized when all inputs are, and the merger is
* destroyed.
*/
void wlr_drm_syncobj_merger_unref(struct wlr_drm_syncobj_merger *merger);
/**
* Add a new timeline point to wait for.
*
* If the point is not materialized, the supplied event loop is used to schedule
* a wait.
*/
bool wlr_drm_syncobj_merger_add(struct wlr_drm_syncobj_merger *merger,
struct wlr_drm_syncobj_timeline *dst_timeline, uint64_t dst_point,
struct wl_event_loop *loop);
/**
* Add a new sync file to wait for.
*
* Ownership of fd is transferred to the merger.
*/
bool wlr_drm_syncobj_merger_add_sync_file(struct wlr_drm_syncobj_merger *merger,
int fd);
/**
* Add a new DMA-BUF release to wait for.
*
* Waits for write access.
* If the platform does not support DMA-BUF<->sync file interop, the supplied
* event_loop is used to schedule a wait.
*/
bool wlr_drm_syncobj_merger_add_dmabuf(struct wlr_drm_syncobj_merger *merger,
struct wlr_buffer *buffer, struct wl_event_loop *event_loop);
#endif
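/* Based only on the declarations above, typical usage would look something
 * like this sketch; the timelines, points, buffer and event loop are
 * placeholders supplied by the caller. Signal dst_timeline at dst_point once
 * an explicit release point and a buffer's implicit DMA-BUF fence have both
 * signalled. */
struct wlr_drm_syncobj_merger *merger =
	wlr_drm_syncobj_merger_create(dst_timeline, dst_point);
if (merger != NULL) {
	wlr_drm_syncobj_merger_add(merger, release_timeline, release_point, loop);
	wlr_drm_syncobj_merger_add_dmabuf(merger, buffer, loop);
	/* Dropping the last reference arms the destination point. */
	wlr_drm_syncobj_merger_unref(merger);
}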


@ -119,8 +119,8 @@ struct wlr_gles2_texture {
GLenum target;
// If this texture is imported from a buffer, the texture is does not own
// these states. These cannot be destroyed along with the texture in this
// If this texture is imported from a buffer, the texture does not own
// these states. They cannot be destroyed along with the texture in this
// case.
GLuint tex;
GLuint fbo;


@ -56,6 +56,10 @@ struct wlr_vk_device {
PFN_vkGetSemaphoreFdKHR vkGetSemaphoreFdKHR;
PFN_vkImportSemaphoreFdKHR vkImportSemaphoreFdKHR;
PFN_vkQueueSubmit2KHR vkQueueSubmit2KHR;
PFN_vkBindImageMemory2KHR vkBindImageMemory2KHR;
PFN_vkCreateSamplerYcbcrConversionKHR vkCreateSamplerYcbcrConversionKHR;
PFN_vkDestroySamplerYcbcrConversionKHR vkDestroySamplerYcbcrConversionKHR;
PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
} api;
uint32_t format_prop_count;
@ -63,6 +67,9 @@ struct wlr_vk_device {
struct wlr_drm_format_set dmabuf_render_formats;
struct wlr_drm_format_set dmabuf_texture_formats;
struct wlr_drm_format_set shm_texture_formats;
float timestamp_period;
uint32_t timestamp_valid_bits;
};
// Tries to find the VkPhysicalDevice for the given drm fd.
@ -277,8 +284,6 @@ struct wlr_vk_command_buffer {
uint64_t timeline_point;
// Textures to destroy after the command buffer completes
struct wl_list destroy_textures; // wlr_vk_texture.destroy_link
// Staging shared buffers to release after the command buffer completes
struct wl_list stage_buffers; // wlr_vk_shared_buffer.link
// Color transform to unref after the command buffer completes
struct wlr_color_transform *color_transform;
@ -345,7 +350,7 @@ struct wlr_vk_renderer {
struct {
struct wlr_vk_command_buffer *cb;
uint64_t last_timeline_point;
struct wl_list buffers; // wlr_vk_shared_buffer.link
struct wl_list buffers; // wlr_vk_stage_buffer.link
} stage;
struct {
@ -406,7 +411,13 @@ VkCommandBuffer vulkan_record_stage_cb(struct wlr_vk_renderer *renderer);
// Submits the current stage command buffer and waits until it has
// finished execution.
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer);
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer, int wait_sync_file_fd);
struct wlr_vk_render_timer {
struct wlr_render_timer base;
struct wlr_vk_renderer *renderer;
VkQueryPool query_pool;
};
struct wlr_vk_render_pass_texture {
struct wlr_vk_texture *texture;
@ -432,20 +443,28 @@ struct wlr_vk_render_pass {
struct wlr_drm_syncobj_timeline *signal_timeline;
uint64_t signal_point;
struct wlr_vk_render_timer *timer;
struct wl_array textures; // struct wlr_vk_render_pass_texture
};
struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *renderer,
struct wlr_vk_render_buffer *buffer, const struct wlr_buffer_pass_options *options);
// Suballocates a buffer span with the given size that can be mapped
// and used as staging buffer. The allocation is implicitly released when the
// stage cb has finished execution. The start of the span will be a multiple
// of the given alignment.
// Suballocates a buffer span with the given size from the staging ring buffer
// that is mapped for CPU access. vulkan_stage_mark_submit must be called after
// allocations are made to mark the timeline point after which the allocations
// will be released. The start of the span will be a multiple of alignment.
struct wlr_vk_buffer_span vulkan_get_stage_span(
struct wlr_vk_renderer *renderer, VkDeviceSize size,
VkDeviceSize alignment);
// Records a watermark on all staging buffers with new allocations with the
// specified timeline point. Once the timeline point is passed, the span will
// be reclaimed by vulkan_stage_buffer_reclaim.
void vulkan_stage_mark_submit(struct wlr_vk_renderer *renderer,
uint64_t timeline_point);
// Tries to allocate a texture descriptor set. Will additionally
// return the pool it was allocated from when successful (for freeing it later).
struct wlr_vk_descriptor_pool *vulkan_alloc_texture_ds(
@ -472,6 +491,8 @@ uint64_t vulkan_end_command_buffer(struct wlr_vk_command_buffer *cb,
void vulkan_reset_command_buffer(struct wlr_vk_command_buffer *cb);
bool vulkan_wait_command_buffer(struct wlr_vk_command_buffer *cb,
struct wlr_vk_renderer *renderer);
VkSemaphore vulkan_command_buffer_wait_sync_file(struct wlr_vk_renderer *renderer,
struct wlr_vk_command_buffer *render_cb, size_t sem_index, int sync_file_fd);
bool vulkan_sync_render_pass_release(struct wlr_vk_renderer *renderer,
struct wlr_vk_render_pass *pass);
@ -484,7 +505,8 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
VkFormat src_format, VkImage src_image,
uint32_t drm_format, uint32_t stride,
uint32_t width, uint32_t height, uint32_t src_x, uint32_t src_y,
uint32_t dst_x, uint32_t dst_y, void *data);
uint32_t dst_x, uint32_t dst_y, void *data,
struct wlr_drm_syncobj_timeline *wait_timeline, uint64_t wait_point);
// State (e.g. image texture) associated with a surface.
struct wlr_vk_texture {
@ -526,29 +548,43 @@ struct wlr_vk_descriptor_pool {
struct wl_list link; // wlr_vk_renderer.descriptor_pools
};
struct wlr_vk_allocation {
VkDeviceSize start;
VkDeviceSize size;
struct wlr_vk_stage_watermark {
VkDeviceSize head;
uint64_t timeline_point;
};
// List of suballocated staging buffers.
// Used to upload to/read from device local images.
struct wlr_vk_shared_buffer {
struct wl_list link; // wlr_vk_renderer.stage.buffers or wlr_vk_command_buffer.stage_buffers
// Ring buffer for staging transfers
struct wlr_vk_stage_buffer {
struct wl_list link; // wlr_vk_renderer.stage.buffers
VkBuffer buffer;
VkDeviceMemory memory;
VkDeviceSize buf_size;
void *cpu_mapping;
struct wl_array allocs; // struct wlr_vk_allocation
int64_t last_used_ms;
VkDeviceSize head;
VkDeviceSize tail;
struct wl_array watermarks; // struct wlr_vk_stage_watermark
int empty_gc_cnt;
};
// Suballocated range on a buffer.
// Suballocated range on a staging ring buffer.
struct wlr_vk_buffer_span {
struct wlr_vk_shared_buffer *buffer;
struct wlr_vk_allocation alloc;
struct wlr_vk_stage_buffer *buffer;
VkDeviceSize offset;
VkDeviceSize size;
};
// Suballocate a span of size bytes from a staging ring buffer, with the
// returned offset rounded up to the given alignment. Returns the byte offset
// of the allocation, or (VkDeviceSize)-1 if the buffer is too full to fit it.
VkDeviceSize vulkan_stage_buffer_alloc(struct wlr_vk_stage_buffer *buf,
VkDeviceSize size, VkDeviceSize alignment);
// Free all allocations covered by watermarks whose timeline point has been
// reached.
void vulkan_stage_buffer_reclaim(struct wlr_vk_stage_buffer *buf,
uint64_t current_point);
// Prepared form for a color transform
struct wlr_vk_color_transform {


@ -58,13 +58,15 @@ void rect_union_finish(struct rect_union *r);
*
* Amortized time: O(1)
*/
void rect_union_add(struct rect_union *r, pixman_box32_t box);
void rect_union_add(struct rect_union *r, const pixman_box32_t *box);
/**
* Compute an exact cover of the rectangles added so far, and return
* a pointer to a pixman_region32_t giving that cover. The pointer will
* remain valid until the next time *r is modified. If there was an allocation
* failure, this function may return a single-rectangle bounding box instead.
* remain valid until the next time *r is modified.
*
* An internal complexity limit is enforced by rect_union. If exceeded, this
* function will instead return a single-rectangle bounding box.
*
* This may be called multiple times and interleaved with rect_union_add().
*


@ -39,8 +39,8 @@ struct wlr_drm_lease {
struct wlr_backend *wlr_drm_backend_create(struct wlr_session *session,
struct wlr_device *dev, struct wlr_backend *parent);
bool wlr_backend_is_drm(struct wlr_backend *backend);
bool wlr_output_is_drm(struct wlr_output *output);
bool wlr_backend_is_drm(const struct wlr_backend *backend);
bool wlr_output_is_drm(const struct wlr_output *output);
/**
* Get the parent DRM backend, if any.


@ -25,7 +25,7 @@ struct wlr_backend *wlr_headless_backend_create(struct wl_event_loop *loop);
struct wlr_output *wlr_headless_add_output(struct wlr_backend *backend,
unsigned int width, unsigned int height);
bool wlr_backend_is_headless(struct wlr_backend *backend);
bool wlr_output_is_headless(struct wlr_output *output);
bool wlr_backend_is_headless(const struct wlr_backend *backend);
bool wlr_output_is_headless(const struct wlr_output *output);
#endif


@ -29,7 +29,7 @@ struct libinput_device *wlr_libinput_get_device_handle(
struct libinput_tablet_tool *wlr_libinput_get_tablet_tool_handle(
struct wlr_tablet_tool *wlr_tablet_tool);
bool wlr_backend_is_libinput(struct wlr_backend *backend);
bool wlr_backend_is_libinput(const struct wlr_backend *backend);
bool wlr_input_device_is_libinput(struct wlr_input_device *device);
#endif


@ -26,7 +26,7 @@ bool wlr_multi_backend_add(struct wlr_backend *multi,
void wlr_multi_backend_remove(struct wlr_backend *multi,
struct wlr_backend *backend);
bool wlr_backend_is_multi(struct wlr_backend *backend);
bool wlr_backend_is_multi(const struct wlr_backend *backend);
bool wlr_multi_is_empty(struct wlr_backend *backend);
void wlr_multi_for_each_backend(struct wlr_backend *backend,


@ -46,7 +46,7 @@ struct wlr_output *wlr_wl_output_create_from_surface(struct wlr_backend *backend
/**
* Check whether the provided backend is a Wayland backend.
*/
bool wlr_backend_is_wl(struct wlr_backend *backend);
bool wlr_backend_is_wl(const struct wlr_backend *backend);
/**
* Check whether the provided input device is a Wayland input device.
@ -56,7 +56,7 @@ bool wlr_input_device_is_wl(struct wlr_input_device *device);
/**
* Check whether the provided output device is a Wayland output device.
*/
bool wlr_output_is_wl(struct wlr_output *output);
bool wlr_output_is_wl(const struct wlr_output *output);
/**
* Sets the title of a struct wlr_output which is a Wayland toplevel.


@ -31,7 +31,7 @@ struct wlr_output *wlr_x11_output_create(struct wlr_backend *backend);
/**
* Check whether this backend is an X11 backend.
*/
bool wlr_backend_is_x11(struct wlr_backend *backend);
bool wlr_backend_is_x11(const struct wlr_backend *backend);
/**
* Check whether this input device is an X11 input device.
@ -41,7 +41,7 @@ bool wlr_input_device_is_x11(struct wlr_input_device *device);
/**
* Check whether this output device is an X11 output device.
*/
bool wlr_output_is_x11(struct wlr_output *output);
bool wlr_output_is_x11(const struct wlr_output *output);
/**
* Sets the title of a struct wlr_output which is an X11 window.


@ -90,10 +90,18 @@ bool wlr_drm_syncobj_timeline_transfer(struct wlr_drm_syncobj_timeline *dst,
*/
bool wlr_drm_syncobj_timeline_check(struct wlr_drm_syncobj_timeline *timeline,
uint64_t point, uint32_t flags, bool *result);
/**
* Signals a timeline point.
*/
bool wlr_drm_syncobj_timeline_signal(struct wlr_drm_syncobj_timeline *timeline, uint64_t point);
/**
* Asynchronously wait for a timeline point.
*
* See wlr_drm_syncobj_timeline_check() for a definition of flags.
* Flags can be:
*
* - 0 to wait for the point to be signalled
* - DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE to only wait for a fence to
* materialize
*
* A callback must be provided that will be invoked when the waiter has finished.
*/


@ -42,9 +42,9 @@ struct wlr_gles2_texture_attribs {
bool has_alpha;
};
bool wlr_renderer_is_gles2(struct wlr_renderer *wlr_renderer);
bool wlr_render_timer_is_gles2(struct wlr_render_timer *timer);
bool wlr_texture_is_gles2(struct wlr_texture *texture);
bool wlr_renderer_is_gles2(const struct wlr_renderer *wlr_renderer);
bool wlr_render_timer_is_gles2(const struct wlr_render_timer *timer);
bool wlr_texture_is_gles2(const struct wlr_texture *texture);
void wlr_gles2_texture_get_attribs(struct wlr_texture *texture,
struct wlr_gles2_texture_attribs *attribs);


@ -14,8 +14,8 @@
struct wlr_renderer *wlr_pixman_renderer_create(void);
bool wlr_renderer_is_pixman(struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_pixman(struct wlr_texture *texture);
bool wlr_renderer_is_pixman(const struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_pixman(const struct wlr_texture *texture);
pixman_image_t *wlr_pixman_renderer_get_buffer_image(
struct wlr_renderer *wlr_renderer, struct wlr_buffer *wlr_buffer);


@ -25,8 +25,8 @@ VkPhysicalDevice wlr_vk_renderer_get_physical_device(struct wlr_renderer *render
VkDevice wlr_vk_renderer_get_device(struct wlr_renderer *renderer);
uint32_t wlr_vk_renderer_get_queue_family(struct wlr_renderer *renderer);
bool wlr_renderer_is_vk(struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_vk(struct wlr_texture *texture);
bool wlr_renderer_is_vk(const struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_vk(const struct wlr_texture *texture);
void wlr_vk_texture_get_image_attribs(struct wlr_texture *texture,
struct wlr_vk_image_attribs *attribs);


@ -37,6 +37,8 @@ struct wlr_texture_read_pixels_options {
uint32_t dst_x, dst_y;
/** Source box of the texture to read from. If empty, the full texture is assumed. */
const struct wlr_box src_box;
struct wlr_drm_syncobj_timeline *wait_timeline;
uint64_t wait_point;
};
bool wlr_texture_read_pixels(struct wlr_texture *texture,


@ -0,0 +1,50 @@
/*
* This an unstable interface of wlroots. No guarantees are made regarding the
* future consistency of this API.
*/
#ifndef WLR_USE_UNSTABLE
#error "Add -DWLR_USE_UNSTABLE to enable unstable wlroots features"
#endif
#ifndef WLR_TYPES_WLR_EXT_BACKGROUND_EFFECT_V1_H
#define WLR_TYPES_WLR_EXT_BACKGROUND_EFFECT_V1_H
#include <pixman.h>
#include <wayland-server-core.h>
#include <wayland-protocols/ext-background-effect-v1-enum.h>
struct wlr_surface;
struct wlr_ext_background_effect_surface_v1_state {
pixman_region32_t blur_region;
};
struct wlr_ext_background_effect_manager_v1 {
struct wl_global *global;
uint32_t capabilities; // bitmask of enum ext_background_effect_manager_v1_capability
struct {
struct wl_signal destroy;
} events;
void *data;
struct {
struct wl_list resources; // wl_resource_get_link()
struct wl_listener display_destroy;
} WLR_PRIVATE;
};
struct wlr_ext_background_effect_manager_v1 *wlr_ext_background_effect_manager_v1_create(
struct wl_display *display, uint32_t version, uint32_t capabilities);
/**
* Get the committed background effect state for a surface.
*
* Returns NULL if the client has not attached a background effect object to
* the surface.
*/
const struct wlr_ext_background_effect_surface_v1_state *
wlr_ext_background_effect_v1_get_surface_state(struct wlr_surface *surface);
#endif


@ -91,7 +91,7 @@ struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1 {
struct {
struct wl_signal destroy;
struct wl_signal new_request; // struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request
struct wl_signal capture_request; // struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event
} events;
struct {
@ -99,7 +99,7 @@ struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1 {
} WLR_PRIVATE;
};
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request {
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event {
struct wlr_ext_foreign_toplevel_handle_v1 *toplevel_handle;
struct wl_client *client;
@ -124,7 +124,7 @@ struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1 *
wlr_ext_foreign_toplevel_image_capture_source_manager_v1_create(struct wl_display *display, uint32_t version);
bool wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_accept(
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request *request,
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event *request,
struct wlr_ext_image_capture_source_v1 *source);
struct wlr_ext_image_capture_source_v1 *wlr_ext_image_capture_source_v1_create_with_scene_node(


@ -19,8 +19,12 @@ struct wlr_linux_drm_syncobj_surface_v1_state {
struct wlr_drm_syncobj_timeline *acquire_timeline;
uint64_t acquire_point;
struct wlr_drm_syncobj_timeline *release_timeline;
uint64_t release_point;
struct {
bool committed;
struct wlr_drm_syncobj_timeline *release_timeline;
uint64_t release_point;
struct wlr_drm_syncobj_merger *release_merger;
} WLR_PRIVATE;
};
struct wlr_linux_drm_syncobj_manager_v1 {
@ -55,4 +59,36 @@ struct wlr_linux_drm_syncobj_surface_v1_state *wlr_linux_drm_syncobj_v1_get_surf
bool wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer(
struct wlr_linux_drm_syncobj_surface_v1_state *state, struct wlr_buffer *buffer);
/**
* Register a release point for buffer usage.
*
* This function may be called multiple times for the same commit. The client's
* release point will be signalled when all registered points are signalled, and
* a new buffer has been committed.
*
* Because the given release point may not be materialized, a wl_event_loop must
* be supplied to schedule a wait internally, if needed
*/
bool wlr_linux_drm_syncobj_v1_state_add_release_point(
struct wlr_linux_drm_syncobj_surface_v1_state *state,
struct wlr_drm_syncobj_timeline *release_timeline, uint64_t release_point,
struct wl_event_loop *event_loop);
/**
* Register the DMA-BUF release of a buffer for buffer usage.
* Non-dmabuf buffers are considered to be immediately available (no wait).
*
* This function may be called multiple times for the same commit. The client's
* release point will be signalled when all registered points are signalled, and
* a new buffer has been committed.
*
* Because the platform may not support DMA-BUF fence merges, a wl_event_loop
* must be supplied to schedule a wait internally, if needed
*
* Waits for write access
*/
bool wlr_linux_drm_syncobj_v1_state_add_release_from_implicit_sync(
struct wlr_linux_drm_syncobj_surface_v1_state *state,
struct wlr_buffer *buffer, struct wl_event_loop *event_loop);
#endif


@ -77,6 +77,7 @@ enum wlr_output_state_field {
WLR_OUTPUT_STATE_SIGNAL_TIMELINE = 1 << 11,
WLR_OUTPUT_STATE_COLOR_TRANSFORM = 1 << 12,
WLR_OUTPUT_STATE_IMAGE_DESCRIPTION = 1 << 13,
WLR_OUTPUT_STATE_COLOR_REPRESENTATION = 1 << 14,
};
enum wlr_output_state_mode_type {
@ -142,6 +143,10 @@ struct wlr_output_state {
* regular page-flip at the next wlr_output.frame event. */
bool tearing_page_flip;
// Set if (committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION)
enum wlr_color_encoding color_encoding;
enum wlr_color_range color_range;
enum wlr_output_state_mode_type mode_type;
struct wlr_output_mode *mode;
struct {
@ -205,6 +210,8 @@ struct wlr_output {
enum wl_output_transform transform;
enum wlr_output_adaptive_sync_status adaptive_sync_status;
uint32_t render_format;
enum wlr_color_encoding color_encoding;
enum wlr_color_range color_range;
const struct wlr_output_image_description *image_description;
// Indicates whether making changes to adaptive sync status is supported.
@ -625,6 +632,15 @@ void wlr_output_state_set_color_transform(struct wlr_output_state *state,
bool wlr_output_state_set_image_description(struct wlr_output_state *state,
const struct wlr_output_image_description *image_desc);
/**
* Set the color encoding and range of the primary scanout buffer.
*
* Pass WLR_COLOR_ENCODING_NONE / WLR_COLOR_RANGE_NONE to reset to defaults.
*/
void wlr_output_state_set_color_encoding_and_range(
struct wlr_output_state *state,
enum wlr_color_encoding encoding, enum wlr_color_range range);
/**
* Copies the output state from src to dst. It is safe to then
* wlr_output_state_finish() src and have dst still be valid.


@ -131,8 +131,6 @@ struct wlr_scene_surface {
struct wlr_addon addon;
struct wl_listener outputs_update;
struct wl_listener output_enter;
struct wl_listener output_leave;
struct wl_listener output_sample;
struct wl_listener frame_done;
struct wl_listener surface_destroy;
@ -155,6 +153,8 @@ struct wlr_scene_outputs_update_event {
struct wlr_scene_output_sample_event {
struct wlr_scene_output *output;
bool direct_scanout;
struct wlr_drm_syncobj_timeline *release_timeline;
uint64_t release_point;
};
struct wlr_scene_frame_done_event {
@ -171,8 +171,6 @@ struct wlr_scene_buffer {
struct {
struct wl_signal outputs_update; // struct wlr_scene_outputs_update_event
struct wl_signal output_enter; // struct wlr_scene_output
struct wl_signal output_leave; // struct wlr_scene_output
struct wl_signal output_sample; // struct wlr_scene_output_sample_event
struct wl_signal frame_done; // struct wlr_scene_frame_done_event
} events;
@ -268,6 +266,8 @@ struct wlr_scene_output {
struct wlr_drm_syncobj_timeline *in_timeline;
uint64_t in_point;
struct wlr_drm_syncobj_timeline *out_timeline;
uint64_t out_point;
} WLR_PRIVATE;
};


@ -33,12 +33,27 @@ struct wlr_virtual_keyboard_v1 {
bool has_keymap;
struct wl_list link; // wlr_virtual_keyboard_manager_v1.virtual_keyboards
struct {
struct wl_listener seat_destroy;
} WLR_PRIVATE;
};
struct wlr_virtual_keyboard_manager_v1* wlr_virtual_keyboard_manager_v1_create(
struct wl_display *display);
/**
* Get the struct wlr_virtual_keyboard_v1 corresponding to a zwp_virtual_keyboard_v1 resource.
*
* Asserts that the resource is a valid zwp_virtual_keyboard_v1 resource created by wlroots.
*
* Returns NULL if the resource is inert.
*/
struct wlr_virtual_keyboard_v1 *wlr_virtual_keyboard_v1_from_resource(
struct wl_resource *resource);
struct wlr_virtual_keyboard_v1 *wlr_input_device_get_virtual_keyboard(
struct wlr_input_device *wlr_dev);
#endif


@ -107,6 +107,13 @@ void wlr_fbox_transform(struct wlr_fbox *dest, const struct wlr_fbox *box,
#ifdef WLR_USE_UNSTABLE
/**
* Checks whether two boxes intersect.
*
* Returns false if either box is empty.
*/
bool wlr_box_intersects(const struct wlr_box *a, const struct wlr_box *b);
/**
* Returns true if the two boxes are equal, false otherwise.
*/


@ -82,9 +82,9 @@ void xwm_handle_selection_notify(struct wlr_xwm *xwm,
xcb_selection_notify_event_t *event);
int xwm_handle_xfixes_selection_notify(struct wlr_xwm *xwm,
xcb_xfixes_selection_notify_event_t *event);
bool data_source_is_xwayland(struct wlr_data_source *wlr_source);
bool data_source_is_xwayland(const struct wlr_data_source *wlr_source);
bool primary_selection_source_is_xwayland(
struct wlr_primary_selection_source *wlr_source);
const struct wlr_primary_selection_source *wlr_source);
void xwm_seat_handle_start_drag(struct wlr_xwm *xwm, struct wlr_drag *drag);


@ -1,7 +1,7 @@
project(
'wlroots',
'c',
version: '0.20.0-rc4',
version: '0.21.0-dev',
license: 'MIT',
meson_version: '>=1.3',
default_options: [
@ -178,6 +178,10 @@ if get_option('examples')
subdir('tinywl')
endif
if get_option('tests')
subdir('test')
endif
pkgconfig = import('pkgconfig')
pkgconfig.generate(
lib_wlr,


@ -7,5 +7,6 @@ option('backends', type: 'array', choices: ['auto', 'drm', 'libinput', 'x11'], v
option('allocators', type: 'array', choices: ['auto', 'gbm', 'udmabuf'], value: ['auto'],
description: 'Select built-in allocators')
option('session', type: 'feature', value: 'auto', description: 'Enable session support')
option('tests', type: 'boolean', value: true, description: 'Build tests and benchmarks')
option('color-management', type: 'feature', value: 'auto', description: 'Enable support for color management')
option('libliftoff', type: 'feature', value: 'auto', description: 'Enable support for libliftoff')


@ -30,6 +30,7 @@ protocols = {
'content-type-v1': wl_protocol_dir / 'staging/content-type/content-type-v1.xml',
'cursor-shape-v1': wl_protocol_dir / 'staging/cursor-shape/cursor-shape-v1.xml',
'drm-lease-v1': wl_protocol_dir / 'staging/drm-lease/drm-lease-v1.xml',
'ext-background-effect-v1': wl_protocol_dir / 'staging/ext-background-effect/ext-background-effect-v1.xml',
'ext-foreign-toplevel-list-v1': wl_protocol_dir / 'staging/ext-foreign-toplevel-list/ext-foreign-toplevel-list-v1.xml',
'ext-idle-notify-v1': wl_protocol_dir / 'staging/ext-idle-notify/ext-idle-notify-v1.xml',
'ext-image-capture-source-v1': wl_protocol_dir / 'staging/ext-image-capture-source/ext-image-capture-source-v1.xml',

View file

@ -192,7 +192,7 @@
<request name="destroy" type="destructor">
<description summary="delete this object, used or not">
Unreferences the frame. This request must be called as soon as its no
Unreferences the frame. This request must be called as soon as it's no
longer used.
It can be called at any time by the client. The client will still have

View file

@ -56,7 +56,8 @@ bool dmabuf_import_sync_file(int dmabuf_fd, uint32_t flags, int sync_file_fd) {
.fd = sync_file_fd,
};
if (drmIoctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &data) != 0) {
wlr_log_errno(WLR_ERROR, "drmIoctl(IMPORT_SYNC_FILE) failed");
enum wlr_log_importance importance = errno == ENOTTY ? WLR_DEBUG : WLR_ERROR;
wlr_log_errno(importance, "drmIoctl(IMPORT_SYNC_FILE) failed");
return false;
}
return true;
@ -68,7 +69,8 @@ int dmabuf_export_sync_file(int dmabuf_fd, uint32_t flags) {
.fd = -1,
};
if (drmIoctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &data) != 0) {
wlr_log_errno(WLR_ERROR, "drmIoctl(EXPORT_SYNC_FILE) failed");
enum wlr_log_importance importance = errno == ENOTTY ? WLR_DEBUG : WLR_ERROR;
wlr_log_errno(importance, "drmIoctl(EXPORT_SYNC_FILE) failed");
return -1;
}
return data.fd;

View file

@ -177,6 +177,14 @@ bool wlr_drm_syncobj_timeline_check(struct wlr_drm_syncobj_timeline *timeline,
return true;
}
bool wlr_drm_syncobj_timeline_signal(struct wlr_drm_syncobj_timeline *timeline, uint64_t point) {
if (drmSyncobjTimelineSignal(timeline->drm_fd, &timeline->handle, &point, 1) != 0) {
wlr_log(WLR_ERROR, "drmSyncobjTimelineSignal() failed");
return false;
}
return true;
}
static int handle_eventfd_ready(int ev_fd, uint32_t mask, void *data) {
struct wlr_drm_syncobj_timeline_waiter *waiter = data;
@ -214,14 +222,8 @@ bool wlr_drm_syncobj_timeline_waiter_init(struct wlr_drm_syncobj_timeline_waiter
return false;
}
struct drm_syncobj_eventfd syncobj_eventfd = {
.handle = timeline->handle,
.flags = flags,
.point = point,
.fd = ev_fd,
};
if (drmIoctl(timeline->drm_fd, DRM_IOCTL_SYNCOBJ_EVENTFD, &syncobj_eventfd) != 0) {
wlr_log_errno(WLR_ERROR, "DRM_IOCTL_SYNCOBJ_EVENTFD failed");
if (drmSyncobjEventfd(timeline->drm_fd, timeline->handle, point, ev_fd, flags) != 0) {
wlr_log_errno(WLR_ERROR, "drmSyncobjEventfd() failed");
close(ev_fd);
return false;
}

194
render/drm_syncobj_merger.c Normal file
View file

@ -0,0 +1,194 @@
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <wayland-util.h>
#include <wlr/render/drm_syncobj.h>
#include <wlr/types/wlr_buffer.h>
#include <wlr/util/log.h>
#include <xf86drm.h>
#include "render/dmabuf.h"
#include "render/drm_syncobj_merger.h"
#include "config.h"
#if HAVE_LINUX_SYNC_FILE
#include <linux/sync_file.h>
#include <sys/ioctl.h>
static int sync_file_merge(int fd1, int fd2) {
// The kernel will automatically prune signalled fences
struct sync_merge_data merge_data = { .fd2 = fd2 };
if (ioctl(fd1, SYNC_IOC_MERGE, &merge_data) < 0) {
wlr_log_errno(WLR_ERROR, "ioctl(SYNC_IOC_MERGE) failed");
return -1;
}
return merge_data.fence;
}
#else
static int sync_file_merge(int fd1, int fd2) {
wlr_log(WLR_ERROR, "sync_file support is unavailable");
return -1;
}
#endif
struct wlr_drm_syncobj_merger *wlr_drm_syncobj_merger_create(
struct wlr_drm_syncobj_timeline *dst_timeline, uint64_t dst_point) {
struct wlr_drm_syncobj_merger *merger = calloc(1, sizeof(*merger));
if (merger == NULL) {
return NULL;
}
merger->n_ref = 1;
merger->dst_timeline = wlr_drm_syncobj_timeline_ref(dst_timeline);
merger->dst_point = dst_point;
merger->sync_fd = -1;
return merger;
}
struct wlr_drm_syncobj_merger *wlr_drm_syncobj_merger_ref(
struct wlr_drm_syncobj_merger *merger) {
assert(merger->n_ref > 0);
merger->n_ref++;
return merger;
}
void wlr_drm_syncobj_merger_unref(struct wlr_drm_syncobj_merger *merger) {
if (merger == NULL) {
return;
}
assert(merger->n_ref > 0);
merger->n_ref--;
if (merger->n_ref > 0) {
return;
}
if (merger->sync_fd != -1) {
wlr_drm_syncobj_timeline_import_sync_file(merger->dst_timeline,
merger->dst_point, merger->sync_fd);
close(merger->sync_fd);
} else {
wlr_drm_syncobj_timeline_signal(merger->dst_timeline, merger->dst_point);
}
wlr_drm_syncobj_timeline_unref(merger->dst_timeline);
free(merger);
}
static bool merger_add_exportable(struct wlr_drm_syncobj_merger *merger,
struct wlr_drm_syncobj_timeline *src_timeline, uint64_t src_point) {
int new_sync = wlr_drm_syncobj_timeline_export_sync_file(src_timeline, src_point);
return wlr_drm_syncobj_merger_add_sync_file(merger, new_sync);
}
struct export_waiter {
struct wlr_drm_syncobj_timeline_waiter waiter;
struct wlr_drm_syncobj_merger *merger;
struct wlr_drm_syncobj_timeline *src_timeline;
uint64_t src_point;
};
static void export_waiter_handle_ready(struct wlr_drm_syncobj_timeline_waiter *waiter) {
struct export_waiter *add = wl_container_of(waiter, add, waiter);
merger_add_exportable(add->merger, add->src_timeline, add->src_point);
wlr_drm_syncobj_merger_unref(add->merger);
wlr_drm_syncobj_timeline_unref(add->src_timeline);
wlr_drm_syncobj_timeline_waiter_finish(&add->waiter);
free(add);
}
bool wlr_drm_syncobj_merger_add(struct wlr_drm_syncobj_merger *merger,
struct wlr_drm_syncobj_timeline *src_timeline, uint64_t src_point,
struct wl_event_loop *loop) {
assert(loop != NULL);
bool exportable = false;
int flags = DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE;
if (!wlr_drm_syncobj_timeline_check(src_timeline, src_point, flags, &exportable)) {
return false;
}
if (exportable) {
return merger_add_exportable(merger, src_timeline, src_point);
}
struct export_waiter *add = calloc(1, sizeof(*add));
if (add == NULL) {
return false;
}
if (!wlr_drm_syncobj_timeline_waiter_init(&add->waiter, src_timeline, src_point,
flags, loop, export_waiter_handle_ready)) {
return false;
}
add->merger = merger;
add->src_timeline = wlr_drm_syncobj_timeline_ref(src_timeline);
add->src_point = src_point;
merger->n_ref++;
return true;
}
bool wlr_drm_syncobj_merger_add_sync_file(struct wlr_drm_syncobj_merger *merger,
int fd) {
int new_sync = fd;
if (merger->sync_fd != -1) {
new_sync = sync_file_merge(merger->sync_fd, fd);
close(fd);
close(merger->sync_fd);
}
merger->sync_fd = new_sync;
return merger->sync_fd != -1;
}
struct poll_waiter {
struct wl_event_source *event_source;
struct wlr_drm_syncobj_merger *merger;
};
static int poll_waiter_handle_done(int fd, uint32_t mask, void *data) {
struct poll_waiter *waiter = data;
wlr_drm_syncobj_merger_unref(waiter->merger);
wl_event_source_remove(waiter->event_source);
free(waiter);
return 0;
}
bool wlr_drm_syncobj_merger_add_dmabuf(struct wlr_drm_syncobj_merger *merger,
struct wlr_buffer *buffer, struct wl_event_loop *event_loop) {
struct wlr_dmabuf_attributes dmabuf_attributes;
if (!wlr_buffer_get_dmabuf(buffer, &dmabuf_attributes)) {
return true;
}
bool res = true;
for (int i = 0; i < dmabuf_attributes.n_planes; ++i) {
int sync_fd = dmabuf_export_sync_file(dmabuf_attributes.fd[i], DMA_BUF_SYNC_WRITE);
if (sync_fd == -1) {
res = false;
break;
}
if (!wlr_drm_syncobj_merger_add_sync_file(merger, sync_fd)) {
return false;
}
}
if (res) {
return true;
}
uint32_t mask = WL_EVENT_ERROR | WL_EVENT_HANGUP | WL_EVENT_WRITABLE;
for (int i = 0; i < dmabuf_attributes.n_planes; ++i) {
struct poll_waiter *waiter = calloc(1, sizeof(*waiter));
if (waiter == NULL) {
return false;
}
waiter->merger = wlr_drm_syncobj_merger_ref(merger);
waiter->event_source = wl_event_loop_add_fd(event_loop,
dmabuf_attributes.fd[i], mask, poll_waiter_handle_done, waiter);
if (waiter->event_source == NULL) {
wlr_drm_syncobj_merger_unref(waiter->merger);
free(waiter);
return false;
}
}
return true;
}
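A rough usage sketch of the merger API introduced above, assuming the caller already has a destination release timeline/point, a source timeline/point, a wlr_buffer, and a wl_event_loop (all variable names below are placeholders):

    struct wlr_drm_syncobj_merger *merger =
        wlr_drm_syncobj_merger_create(release_timeline, release_point);
    if (merger != NULL) {
        // Accumulate fences from a syncobj timeline and from a dmabuf
        wlr_drm_syncobj_merger_add(merger, src_timeline, src_point, loop);
        wlr_drm_syncobj_merger_add_dmabuf(merger, buffer, loop);
        // Dropping the last reference imports the merged sync_file into
        // release_timeline at release_point, or signals the point directly
        // if nothing was merged
        wlr_drm_syncobj_merger_unref(merger);
    }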

View file

@ -29,7 +29,7 @@
static const struct wlr_renderer_impl renderer_impl;
static const struct wlr_render_timer_impl render_timer_impl;
bool wlr_renderer_is_gles2(struct wlr_renderer *wlr_renderer) {
bool wlr_renderer_is_gles2(const struct wlr_renderer *wlr_renderer) {
return wlr_renderer->impl == &renderer_impl;
}
@ -40,7 +40,7 @@ struct wlr_gles2_renderer *gles2_get_renderer(
return renderer;
}
bool wlr_render_timer_is_gles2(struct wlr_render_timer *timer) {
bool wlr_render_timer_is_gles2(const struct wlr_render_timer *timer) {
return timer->impl == &render_timer_impl;
}

View file

@ -4,8 +4,10 @@
#include <GLES2/gl2ext.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <wayland-server-protocol.h>
#include <wayland-util.h>
#include <wlr/render/drm_syncobj.h>
#include <wlr/render/egl.h>
#include <wlr/render/interface.h>
#include <wlr/render/wlr_texture.h>
@ -16,7 +18,7 @@
static const struct wlr_texture_impl texture_impl;
bool wlr_texture_is_gles2(struct wlr_texture *wlr_texture) {
bool wlr_texture_is_gles2(const struct wlr_texture *wlr_texture) {
return wlr_texture->impl == &texture_impl;
}
@ -201,6 +203,27 @@ static bool gles2_texture_read_pixels(struct wlr_texture *wlr_texture,
return false;
}
if (options->wait_timeline != NULL) {
int sync_file_fd =
wlr_drm_syncobj_timeline_export_sync_file(options->wait_timeline, options->wait_point);
if (sync_file_fd < 0) {
return false;
}
struct wlr_gles2_renderer *renderer = texture->renderer;
EGLSyncKHR sync = wlr_egl_create_sync(renderer->egl, sync_file_fd);
close(sync_file_fd);
if (sync == EGL_NO_SYNC_KHR) {
return false;
}
bool ok = wlr_egl_wait_sync(renderer->egl, sync);
wlr_egl_destroy_sync(renderer->egl, sync);
if (!ok) {
return false;
}
}
// Make sure any pending drawing is finished before we try to read it
glFinish();

View file

@ -9,6 +9,7 @@ wlr_files += files(
'color.c',
'dmabuf.c',
'drm_format_set.c',
'drm_syncobj_merger.c',
'drm_syncobj.c',
'pass.c',
'pixel_format.c',
@ -28,6 +29,7 @@ else
endif
internal_config.set10('HAVE_EVENTFD', cc.has_header('sys/eventfd.h'))
internal_config.set10('HAVE_LINUX_SYNC_FILE', cc.has_header('linux/sync_file.h'))
if 'gles2' in renderers or 'auto' in renderers
egl = dependency('egl', required: 'gles2' in renderers)

View file

@ -159,6 +159,7 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
switch (options->filter_mode) {
case WLR_SCALE_FILTER_BILINEAR:
pixman_image_set_repeat(texture->image, PIXMAN_REPEAT_PAD);
pixman_image_set_filter(texture->image, PIXMAN_FILTER_BILINEAR, NULL, 0);
break;
case WLR_SCALE_FILTER_NEAREST:

View file

@ -12,7 +12,7 @@
static const struct wlr_renderer_impl renderer_impl;
bool wlr_renderer_is_pixman(struct wlr_renderer *wlr_renderer) {
bool wlr_renderer_is_pixman(const struct wlr_renderer *wlr_renderer) {
return wlr_renderer->impl == &renderer_impl;
}
@ -69,7 +69,7 @@ static struct wlr_pixman_buffer *get_buffer(
static const struct wlr_texture_impl texture_impl;
bool wlr_texture_is_pixman(struct wlr_texture *texture) {
bool wlr_texture_is_pixman(const struct wlr_texture *texture) {
return texture->impl == &texture_impl;
}

View file

@ -2,7 +2,9 @@
#include <drm_fourcc.h>
#include <stdlib.h>
#include <unistd.h>
#include <wlr/util/box.h>
#include <wlr/util/log.h>
#include <wlr/util/transform.h>
#include <wlr/render/color.h>
#include <wlr/render/drm_syncobj.h>
@ -38,17 +40,6 @@ static void bind_pipeline(struct wlr_vk_render_pass *pass, VkPipeline pipeline)
pass->bound_pipeline = pipeline;
}
static void get_clip_region(struct wlr_vk_render_pass *pass,
const pixman_region32_t *in, pixman_region32_t *out) {
if (in != NULL) {
pixman_region32_init(out);
pixman_region32_copy(out, in);
} else {
struct wlr_buffer *buffer = pass->render_buffer->wlr_buffer;
pixman_region32_init_rect(out, 0, 0, buffer->width, buffer->height);
}
}
static void convert_pixman_box_to_vk_rect(const pixman_box32_t *box, VkRect2D *rect) {
*rect = (VkRect2D){
.offset = { .x = box->x1, .y = box->y1 },
@ -99,53 +90,6 @@ static void render_pass_destroy(struct wlr_vk_render_pass *pass) {
free(pass);
}
static VkSemaphore render_pass_wait_sync_file(struct wlr_vk_render_pass *pass,
size_t sem_index, int sync_file_fd) {
struct wlr_vk_renderer *renderer = pass->renderer;
struct wlr_vk_command_buffer *render_cb = pass->command_buffer;
VkResult res;
VkSemaphore *wait_semaphores = render_cb->wait_semaphores.data;
size_t wait_semaphores_len = render_cb->wait_semaphores.size / sizeof(wait_semaphores[0]);
VkSemaphore *sem_ptr;
if (sem_index >= wait_semaphores_len) {
sem_ptr = wl_array_add(&render_cb->wait_semaphores, sizeof(*sem_ptr));
if (sem_ptr == NULL) {
return VK_NULL_HANDLE;
}
*sem_ptr = VK_NULL_HANDLE;
} else {
sem_ptr = &wait_semaphores[sem_index];
}
if (*sem_ptr == VK_NULL_HANDLE) {
VkSemaphoreCreateInfo semaphore_info = {
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
};
res = vkCreateSemaphore(renderer->dev->dev, &semaphore_info, NULL, sem_ptr);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateSemaphore", res);
return VK_NULL_HANDLE;
}
}
VkImportSemaphoreFdInfoKHR import_info = {
.sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
.flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
.semaphore = *sem_ptr,
.fd = sync_file_fd,
};
res = renderer->dev->api.vkImportSemaphoreFdKHR(renderer->dev->dev, &import_info);
if (res != VK_SUCCESS) {
wlr_vk_error("vkImportSemaphoreFdKHR", res);
return VK_NULL_HANDLE;
}
return *sem_ptr;
}
static bool render_pass_wait_render_buffer(struct wlr_vk_render_pass *pass,
VkSemaphoreSubmitInfoKHR *render_wait, uint32_t *render_wait_len_ptr) {
int sync_file_fds[WLR_DMABUF_MAX_PLANES];
@ -162,7 +106,8 @@ static bool render_pass_wait_render_buffer(struct wlr_vk_render_pass *pass,
continue;
}
VkSemaphore sem = render_pass_wait_sync_file(pass, *render_wait_len_ptr, sync_file_fds[i]);
VkSemaphore sem = vulkan_command_buffer_wait_sync_file(pass->renderer,
pass->command_buffer, *render_wait_len_ptr, sync_file_fds[i]);
if (sem == VK_NULL_HANDLE) {
close(sync_file_fds[i]);
continue;
@ -248,11 +193,13 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
int width = pass->render_buffer->wlr_buffer->width;
int height = pass->render_buffer->wlr_buffer->height;
float final_matrix[9] = {
width, 0, -1,
0, height, -1,
0, 0, 0,
};
struct wlr_box output_box = { 0, 0, width, height };
float proj[9], final_matrix[9];
wlr_matrix_identity(proj);
wlr_matrix_project_box(final_matrix, &output_box,
WL_OUTPUT_TRANSFORM_NORMAL, proj);
wlr_matrix_multiply(final_matrix, pass->projection, final_matrix);
struct wlr_vk_vert_pcr_data vert_pcr_data = {
.uv_off = { 0, 0 },
.uv_size = { 1, 1 },
@ -331,16 +278,38 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(
clip, &clip_rects_len);
for (int i = 0; i < clip_rects_len; i++) {
VkRect2D rect;
convert_pixman_box_to_vk_rect(&clip_rects[i], &rect);
vkCmdSetScissor(render_cb->vk, 0, 1, &rect);
vkCmdDraw(render_cb->vk, 4, 1, 0, 0);
if (clip_rects_len > 0) {
const VkDeviceSize instance_size = 4 * sizeof(float);
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer,
clip_rects_len * instance_size, 16);
if (!span.buffer) {
pass->failed = true;
goto error;
}
float *instance_data = (float *)((char *)span.buffer->cpu_mapping + span.offset);
for (int i = 0; i < clip_rects_len; i++) {
const pixman_box32_t *b = &clip_rects[i];
instance_data[i * 4 + 0] = (float)b->x1 / width;
instance_data[i * 4 + 1] = (float)b->y1 / height;
instance_data[i * 4 + 2] = (float)(b->x2 - b->x1) / width;
instance_data[i * 4 + 3] = (float)(b->y2 - b->y1) / height;
}
VkDeviceSize vb_offset = span.offset;
vkCmdBindVertexBuffers(render_cb->vk, 0, 1, &span.buffer->buffer, &vb_offset);
vkCmdDraw(render_cb->vk, 4, clip_rects_len, 0, 0);
}
}
vkCmdEndRenderPass(render_cb->vk);
if (pass->timer != NULL) {
vkCmdWriteTimestamp(render_cb->vk, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
pass->timer->query_pool, 1);
}
size_t pass_textures_len = pass->textures.size / sizeof(struct wlr_vk_render_pass_texture);
size_t render_wait_cap = (1 + pass_textures_len) * WLR_DMABUF_MAX_PLANES;
render_wait = calloc(render_wait_cap, sizeof(*render_wait));
@ -431,7 +400,8 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
continue;
}
VkSemaphore sem = render_pass_wait_sync_file(pass, render_wait_len, sync_file_fds[i]);
VkSemaphore sem = vulkan_command_buffer_wait_sync_file(renderer, render_cb,
render_wait_len, sync_file_fds[i]);
if (sem == VK_NULL_HANDLE) {
close(sync_file_fds[i]);
continue;
@ -635,14 +605,7 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
free(render_wait);
struct wlr_vk_shared_buffer *stage_buf, *stage_buf_tmp;
wl_list_for_each_safe(stage_buf, stage_buf_tmp, &renderer->stage.buffers, link) {
if (stage_buf->allocs.size == 0) {
continue;
}
wl_list_remove(&stage_buf->link);
wl_list_insert(&stage_cb->stage_buffers, &stage_buf->link);
}
vulkan_stage_mark_submit(renderer, render_timeline_point);
if (!vulkan_sync_render_pass_release(renderer, pass)) {
wlr_log(WLR_ERROR, "Failed to sync render buffer");
@ -667,18 +630,11 @@ error:
}
static void render_pass_mark_box_updated(struct wlr_vk_render_pass *pass,
const struct wlr_box *box) {
const pixman_box32_t *box) {
if (!pass->two_pass) {
return;
}
pixman_box32_t pixman_box = {
.x1 = box->x,
.x2 = box->x + box->width,
.y1 = box->y,
.y2 = box->y + box->height,
};
rect_union_add(&pass->updated_region, pixman_box);
rect_union_add(&pass->updated_region, box);
}
static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
@ -698,29 +654,26 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
options->color.a, // no conversion for alpha
};
struct wlr_box box;
wlr_render_rect_options_get_box(options, pass->render_buffer->wlr_buffer, &box);
pixman_region32_t clip;
get_clip_region(pass, options->clip, &clip);
if (options->clip) {
pixman_region32_init(&clip);
pixman_region32_intersect_rect(&clip, options->clip,
box.x, box.y, box.width, box.height);
} else {
pixman_region32_init_rect(&clip,
box.x, box.y, box.width, box.height);
}
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(&clip, &clip_rects_len);
// Record regions possibly updated for use in second subpass
for (int i = 0; i < clip_rects_len; i++) {
struct wlr_box clip_box = {
.x = clip_rects[i].x1,
.y = clip_rects[i].y1,
.width = clip_rects[i].x2 - clip_rects[i].x1,
.height = clip_rects[i].y2 - clip_rects[i].y1,
};
struct wlr_box intersection;
if (!wlr_box_intersection(&intersection, &options->box, &clip_box)) {
continue;
}
render_pass_mark_box_updated(pass, &intersection);
if (clip_rects_len == 0) {
pixman_region32_fini(&clip);
return;
}
struct wlr_box box;
wlr_render_rect_options_get_box(options, pass->render_buffer->wlr_buffer, &box);
switch (options->blend_mode) {
case WLR_RENDER_BLEND_MODE_PREMULTIPLIED:;
float proj[9], matrix[9];
@ -739,6 +692,23 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
break;
}
const VkDeviceSize instance_size = 4 * sizeof(float);
struct wlr_vk_buffer_span span = vulkan_get_stage_span(pass->renderer,
clip_rects_len * instance_size, 16);
if (!span.buffer) {
pass->failed = true;
break;
}
float *instance_data = (float *)((char *)span.buffer->cpu_mapping + span.offset);
for (int i = 0; i < clip_rects_len; i++) {
const pixman_box32_t *rect = &clip_rects[i];
render_pass_mark_box_updated(pass, rect);
instance_data[i * 4 + 0] = (float)(rect->x1 - box.x) / box.width;
instance_data[i * 4 + 1] = (float)(rect->y1 - box.y) / box.height;
instance_data[i * 4 + 2] = (float)(rect->x2 - rect->x1) / box.width;
instance_data[i * 4 + 3] = (float)(rect->y2 - rect->y1) / box.height;
}
struct wlr_vk_vert_pcr_data vert_pcr_data = {
.uv_off = { 0, 0 },
.uv_size = { 1, 1 },
@ -752,12 +722,9 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
VK_SHADER_STAGE_FRAGMENT_BIT, sizeof(vert_pcr_data), sizeof(float) * 4,
linear_color);
for (int i = 0; i < clip_rects_len; i++) {
VkRect2D rect;
convert_pixman_box_to_vk_rect(&clip_rects[i], &rect);
vkCmdSetScissor(cb, 0, 1, &rect);
vkCmdDraw(cb, 4, 1, 0, 0);
}
VkDeviceSize vb_offset = span.offset;
vkCmdBindVertexBuffers(cb, 0, 1, &span.buffer->buffer, &vb_offset);
vkCmdDraw(cb, 4, clip_rects_len, 0, 0);
break;
case WLR_RENDER_BLEND_MODE_NONE:;
VkClearAttachment clear_att = {
@ -774,7 +741,9 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
.layerCount = 1,
};
for (int i = 0; i < clip_rects_len; i++) {
convert_pixman_box_to_vk_rect(&clip_rects[i], &clear_rect.rect);
const pixman_box32_t *rect = &clip_rects[i];
render_pass_mark_box_updated(pass, rect);
convert_pixman_box_to_vk_rect(rect, &clear_rect.rect);
vkCmdClearAttachments(cb, 1, &clear_att, 1, &clear_rect);
}
break;
@ -816,6 +785,31 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
wlr_matrix_project_box(matrix, &dst_box, options->transform, proj);
wlr_matrix_multiply(matrix, pass->projection, matrix);
pixman_region32_t clip;
if (options->clip) {
pixman_region32_init(&clip);
pixman_region32_intersect_rect(&clip, options->clip,
dst_box.x, dst_box.y, dst_box.width, dst_box.height);
} else {
pixman_region32_init_rect(&clip,
dst_box.x, dst_box.y, dst_box.width, dst_box.height);
}
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(&clip, &clip_rects_len);
if (clip_rects_len == 0) {
pixman_region32_fini(&clip);
return;
}
const VkDeviceSize instance_size = 4 * sizeof(float);
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer,
clip_rects_len * instance_size, 16);
if (!span.buffer) {
pixman_region32_fini(&clip);
pass->failed = true;
return;
}
struct wlr_vk_vert_pcr_data vert_pcr_data = {
.uv_off = {
src_box.x / options->texture->width,
@ -886,6 +880,7 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
WLR_RENDER_BLEND_MODE_NONE : options->blend_mode,
});
if (!pipe) {
pixman_region32_fini(&clip);
pass->failed = true;
return;
}
@ -893,6 +888,7 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
struct wlr_vk_texture_view *view =
vulkan_texture_get_or_create_view(texture, pipe->layout, srgb_image_view);
if (!view) {
pixman_region32_fini(&clip);
pass->failed = true;
return;
}
@ -930,34 +926,35 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
VK_SHADER_STAGE_FRAGMENT_BIT, sizeof(vert_pcr_data),
sizeof(frag_pcr_data), &frag_pcr_data);
pixman_region32_t clip;
get_clip_region(pass, options->clip, &clip);
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(&clip, &clip_rects_len);
float *instance_data = (float *)((char *)span.buffer->cpu_mapping + span.offset);
for (int i = 0; i < clip_rects_len; i++) {
VkRect2D rect;
convert_pixman_box_to_vk_rect(&clip_rects[i], &rect);
vkCmdSetScissor(cb, 0, 1, &rect);
vkCmdDraw(cb, 4, 1, 0, 0);
const pixman_box32_t *rect = &clip_rects[i];
render_pass_mark_box_updated(pass, rect);
struct wlr_box clip_box = {
.x = clip_rects[i].x1,
.y = clip_rects[i].y1,
.width = clip_rects[i].x2 - clip_rects[i].x1,
.height = clip_rects[i].y2 - clip_rects[i].y1,
struct wlr_fbox norm = {
.x = (double)(rect->x1 - dst_box.x) / dst_box.width,
.y = (double)(rect->y1 - dst_box.y) / dst_box.height,
.width = (double)(rect->x2 - rect->x1) / dst_box.width,
.height = (double)(rect->y2 - rect->y1) / dst_box.height,
};
struct wlr_box intersection;
if (!wlr_box_intersection(&intersection, &dst_box, &clip_box)) {
continue;
if (options->transform != WL_OUTPUT_TRANSFORM_NORMAL) {
wlr_fbox_transform(&norm, &norm, options->transform, 1.0, 1.0);
}
render_pass_mark_box_updated(pass, &intersection);
instance_data[i * 4 + 0] = (float)norm.x;
instance_data[i * 4 + 1] = (float)norm.y;
instance_data[i * 4 + 2] = (float)norm.width;
instance_data[i * 4 + 3] = (float)norm.height;
}
pixman_region32_fini(&clip);
VkDeviceSize vb_offset = span.offset;
vkCmdBindVertexBuffers(cb, 0, 1, &span.buffer->buffer, &vb_offset);
vkCmdDraw(cb, 4, clip_rects_len, 0, 0);
texture->last_used_cb = pass->command_buffer;
pixman_region32_fini(&clip);
if (texture->dmabuf_imported || (options != NULL && options->wait_timeline != NULL)) {
struct wlr_vk_render_pass_texture *pass_texture =
wl_array_add(&pass->textures, sizeof(*pass_texture));
@ -1039,7 +1036,7 @@ static bool create_3d_lut_image(struct wlr_vk_renderer *renderer,
res = vkCreateImage(dev, &img_info, NULL, image);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateImage failed", res);
return NULL;
return false;
}
VkMemoryRequirements mem_reqs = {0};
@ -1096,13 +1093,13 @@ static bool create_3d_lut_image(struct wlr_vk_renderer *renderer,
size_t size = dim_len * dim_len * dim_len * bytes_per_block;
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer,
size, bytes_per_block);
if (!span.buffer || span.alloc.size != size) {
if (!span.buffer || span.size != size) {
wlr_log(WLR_ERROR, "Failed to retrieve staging buffer");
goto fail_imageview;
}
float sample_range = 1.0f / (dim_len - 1);
char *map = (char *)span.buffer->cpu_mapping + span.alloc.start;
char *map = (char *)span.buffer->cpu_mapping + span.offset;
float *dst = (float *)map;
for (size_t b_index = 0; b_index < dim_len; b_index++) {
for (size_t g_index = 0; g_index < dim_len; g_index++) {
@ -1132,7 +1129,7 @@ static bool create_3d_lut_image(struct wlr_vk_renderer *renderer,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_ACCESS_TRANSFER_WRITE_BIT);
VkBufferImageCopy copy = {
.bufferOffset = span.alloc.start,
.bufferOffset = span.offset,
.imageExtent.width = dim_len,
.imageExtent.height = dim_len,
.imageExtent.depth = dim_len,
@ -1302,7 +1299,7 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
struct wlr_vk_command_buffer *cb = vulkan_acquire_command_buffer(renderer);
if (cb == NULL) {
free(pass);
render_pass_destroy(pass);
return NULL;
}
@ -1313,7 +1310,7 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
if (res != VK_SUCCESS) {
wlr_vk_error("vkBeginCommandBuffer", res);
vulkan_reset_command_buffer(cb);
free(pass);
render_pass_destroy(pass);
return NULL;
}
@ -1325,6 +1322,14 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT, VK_ACCESS_SHADER_READ_BIT);
}
struct wlr_vk_render_timer *timer = NULL;
if (options != NULL && options->timer != NULL) {
timer = wl_container_of(options->timer, timer, base);
vkCmdResetQueryPool(cb->vk, timer->query_pool, 0, 2);
vkCmdWriteTimestamp(cb->vk, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
timer->query_pool, 0);
}
int width = buffer->wlr_buffer->width;
int height = buffer->wlr_buffer->height;
VkRect2D rect = { .extent = { width, height } };
@ -1343,6 +1348,7 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
.height = height,
.maxDepth = 1,
});
vkCmdSetScissor(cb->vk, 0, 1, &rect);
// matrix_projection() assumes a GL coordinate system so we need
// to pass WL_OUTPUT_TRANSFORM_FLIPPED_180 to adjust it for vulkan.
@ -1353,5 +1359,6 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
pass->render_buffer_out = buffer_out;
pass->render_setup = render_setup;
pass->command_buffer = cb;
pass->timer = timer;
return pass;
}

View file

@ -1,6 +1,5 @@
#include <assert.h>
#include <fcntl.h>
#include <math.h>
#include <poll.h>
#include <stdlib.h>
#include <stdint.h>
@ -8,6 +7,7 @@
#include <unistd.h>
#include <drm_fourcc.h>
#include <vulkan/vulkan.h>
#include <wayland-util.h>
#include <wlr/render/color.h>
#include <wlr/render/interface.h>
#include <wlr/types/wlr_drm.h>
@ -26,11 +26,9 @@
#include "render/vulkan/shaders/texture.frag.h"
#include "render/vulkan/shaders/quad.frag.h"
#include "render/vulkan/shaders/output.frag.h"
#include "types/wlr_buffer.h"
#include "util/time.h"
#include "util/array.h"
// TODO:
// - simplify stage allocation, don't track allocations but use ringbuffer-like
// - use a pipeline cache (not sure when to save though, after every pipeline
// creation?)
// - create pipelines as derivatives of each other
@ -46,7 +44,7 @@ static bool default_debug = true;
static const struct wlr_renderer_impl renderer_impl;
bool wlr_renderer_is_vk(struct wlr_renderer *wlr_renderer) {
bool wlr_renderer_is_vk(const struct wlr_renderer *wlr_renderer) {
return wlr_renderer->impl == &renderer_impl;
}
@ -187,18 +185,13 @@ static void destroy_render_format_setup(struct wlr_vk_renderer *renderer,
free(setup);
}
static void shared_buffer_destroy(struct wlr_vk_renderer *r,
struct wlr_vk_shared_buffer *buffer) {
static void stage_buffer_destroy(struct wlr_vk_renderer *r,
struct wlr_vk_stage_buffer *buffer) {
if (!buffer) {
return;
}
if (buffer->allocs.size > 0) {
wlr_log(WLR_ERROR, "shared_buffer_finish: %zu allocations left",
buffer->allocs.size / sizeof(struct wlr_vk_allocation));
}
wl_array_release(&buffer->allocs);
wl_array_release(&buffer->watermarks);
if (buffer->cpu_mapping) {
vkUnmapMemory(r->dev->dev, buffer->memory);
buffer->cpu_mapping = NULL;
@ -214,75 +207,12 @@ static void shared_buffer_destroy(struct wlr_vk_renderer *r,
free(buffer);
}
struct wlr_vk_buffer_span vulkan_get_stage_span(struct wlr_vk_renderer *r,
VkDeviceSize size, VkDeviceSize alignment) {
// try to find free span
// simple greedy allocation algorithm - should be enough for this usecase
// since all allocations are freed together after the frame
struct wlr_vk_shared_buffer *buf;
wl_list_for_each_reverse(buf, &r->stage.buffers, link) {
VkDeviceSize start = 0u;
if (buf->allocs.size > 0) {
const struct wlr_vk_allocation *allocs = buf->allocs.data;
size_t allocs_len = buf->allocs.size / sizeof(struct wlr_vk_allocation);
const struct wlr_vk_allocation *last = &allocs[allocs_len - 1];
start = last->start + last->size;
}
assert(start <= buf->buf_size);
// ensure the proposed start is a multiple of alignment
start += alignment - 1 - ((start + alignment - 1) % alignment);
if (buf->buf_size - start < size) {
continue;
}
struct wlr_vk_allocation *a = wl_array_add(&buf->allocs, sizeof(*a));
if (a == NULL) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
goto error_alloc;
}
*a = (struct wlr_vk_allocation){
.start = start,
.size = size,
};
return (struct wlr_vk_buffer_span) {
.buffer = buf,
.alloc = *a,
};
}
if (size > max_stage_size) {
wlr_log(WLR_ERROR, "cannot vulkan stage buffer: "
"requested size (%zu bytes) exceeds maximum (%zu bytes)",
(size_t)size, (size_t)max_stage_size);
goto error_alloc;
}
// we didn't find a free buffer - create one
// size = clamp(max(size * 2, prev_size * 2), min_size, max_size)
VkDeviceSize bsize = size * 2;
bsize = bsize < min_stage_size ? min_stage_size : bsize;
if (!wl_list_empty(&r->stage.buffers)) {
struct wl_list *last_link = r->stage.buffers.prev;
struct wlr_vk_shared_buffer *prev = wl_container_of(
last_link, prev, link);
VkDeviceSize last_size = 2 * prev->buf_size;
bsize = bsize < last_size ? last_size : bsize;
}
if (bsize > max_stage_size) {
wlr_log(WLR_INFO, "vulkan stage buffers have reached max size");
bsize = max_stage_size;
}
// create buffer
buf = calloc(1, sizeof(*buf));
static struct wlr_vk_stage_buffer *stage_buffer_create(
struct wlr_vk_renderer *r, VkDeviceSize bsize) {
struct wlr_vk_stage_buffer *buf = calloc(1, sizeof(*buf));
if (!buf) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
goto error_alloc;
return NULL;
}
wl_list_init(&buf->link);
@ -292,7 +222,8 @@ struct wlr_vk_buffer_span vulkan_get_stage_span(struct wlr_vk_renderer *r,
.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
.size = bsize,
.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT |
VK_BUFFER_USAGE_TRANSFER_SRC_BIT,
VK_BUFFER_USAGE_TRANSFER_SRC_BIT |
VK_BUFFER_USAGE_VERTEX_BUFFER_BIT,
.sharingMode = VK_SHARING_MODE_EXCLUSIVE,
};
res = vkCreateBuffer(r->dev->dev, &buf_info, NULL, &buf->buffer);
@ -319,7 +250,7 @@ struct wlr_vk_buffer_span vulkan_get_stage_span(struct wlr_vk_renderer *r,
};
res = vkAllocateMemory(r->dev->dev, &mem_info, NULL, &buf->memory);
if (res != VK_SUCCESS) {
wlr_vk_error("vkAllocatorMemory", res);
wlr_vk_error("vkAllocateMemory", res);
goto error;
}
@ -335,34 +266,162 @@ struct wlr_vk_buffer_span vulkan_get_stage_span(struct wlr_vk_renderer *r,
goto error;
}
struct wlr_vk_allocation *a = wl_array_add(&buf->allocs, sizeof(*a));
if (a == NULL) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
buf->buf_size = bsize;
return buf;
error:
stage_buffer_destroy(r, buf);
return NULL;
}
void vulkan_stage_buffer_reclaim(struct wlr_vk_stage_buffer *buf,
uint64_t current_point) {
size_t completed = 0;
struct wlr_vk_stage_watermark *mark;
wl_array_for_each(mark, &buf->watermarks) {
if (mark->timeline_point > current_point) {
break;
}
buf->tail = mark->head;
completed++;
}
if (completed > 0) {
completed *= sizeof(struct wlr_vk_stage_watermark);
if (completed == buf->watermarks.size) {
buf->watermarks.size = 0;
} else {
array_remove_at(&buf->watermarks, 0, completed);
}
}
}
VkDeviceSize vulkan_stage_buffer_alloc(struct wlr_vk_stage_buffer *buf,
VkDeviceSize size, VkDeviceSize alignment) {
VkDeviceSize head = buf->head;
// Round up to the next multiple of alignment
VkDeviceSize rem = head % alignment;
if (rem != 0) {
head += alignment - rem;
}
VkDeviceSize end = head >= buf->tail ? buf->buf_size : buf->tail;
if (head + size < end) {
// Regular allocation: advance head within the currently available space
buf->head = head + size;
return head;
} else if (size < buf->tail && head >= buf->tail) {
// First allocation after wrap-around
buf->head = size;
return 0;
}
return (VkDeviceSize)-1;
}
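// Illustrative trace of the allocator above (hypothetical numbers, alignment 4),
// with buf_size = 64, tail = 40 and head = 48:
//   alloc(8)  -> 48 + 8 < 64, so it returns offset 48 and head becomes 56
//   alloc(16) -> 56 + 16 >= 64, but 16 < tail and head >= tail, so it wraps:
//                returns offset 0 and head becomes 16
//   alloc(32) -> head (16) is now below tail (40), so only 24 bytes are free
//                in front of it and the wrap branch is unavailable; it returns
//                (VkDeviceSize)-1 and the caller grows the staging pool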
struct wlr_vk_buffer_span vulkan_get_stage_span(struct wlr_vk_renderer *r,
VkDeviceSize size, VkDeviceSize alignment) {
if (size >= max_stage_size) {
wlr_log(WLR_ERROR, "cannot allocate stage buffer: "
"requested size (%zu bytes) exceeds maximum (%zu bytes)",
(size_t)size, (size_t)max_stage_size-1);
goto error;
}
buf->buf_size = bsize;
wl_list_insert(&r->stage.buffers, &buf->link);
VkDeviceSize max_buf_size = min_stage_size / 2;
struct wlr_vk_stage_buffer *buf;
wl_list_for_each(buf, &r->stage.buffers, link) {
VkDeviceSize offset = vulkan_stage_buffer_alloc(buf, size, alignment);
if (offset != (VkDeviceSize)-1) {
return (struct wlr_vk_buffer_span) {
.buffer = buf,
.offset = offset,
.size = size,
};
}
if (buf->buf_size > max_buf_size) {
max_buf_size = buf->buf_size;
}
}
VkDeviceSize bsize = max_buf_size * 2;
while (size * 2 > bsize) {
bsize *= 2;
}
if (bsize > max_stage_size) {
wlr_log(WLR_INFO, "vulkan stage buffer has reached max size");
bsize = max_stage_size;
}
struct wlr_vk_stage_buffer *new_buf = stage_buffer_create(r, bsize);
if (new_buf == NULL) {
goto error;
}
wl_list_insert(r->stage.buffers.prev, &new_buf->link);
VkDeviceSize offset = vulkan_stage_buffer_alloc(new_buf, size, alignment);
assert(offset != (VkDeviceSize)-1);
*a = (struct wlr_vk_allocation){
.start = 0,
.size = size,
};
return (struct wlr_vk_buffer_span) {
.buffer = buf,
.alloc = *a,
.buffer = new_buf,
.offset = offset,
.size = size,
};
error:
shared_buffer_destroy(r, buf);
error_alloc:
return (struct wlr_vk_buffer_span) {
.buffer = NULL,
.alloc = (struct wlr_vk_allocation) {0, 0},
.offset = 0,
.size = 0,
};
}
void vulkan_stage_mark_submit(struct wlr_vk_renderer *renderer,
uint64_t timeline_point) {
struct wlr_vk_stage_buffer *buf;
wl_list_for_each(buf, &renderer->stage.buffers, link) {
if (buf->head == buf->tail) {
continue;
}
struct wlr_vk_stage_watermark *mark = wl_array_add(
&buf->watermarks, sizeof(*mark));
if (mark == NULL) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
continue;
}
*mark = (struct wlr_vk_stage_watermark){
.head = buf->head,
.timeline_point = timeline_point,
};
}
}
static void stage_buffer_gc(struct wlr_vk_renderer *renderer, uint64_t current_point) {
struct wlr_vk_stage_buffer *buf, *buf_tmp;
wl_list_for_each_safe(buf, buf_tmp, &renderer->stage.buffers, link) {
if (buf->head != buf->tail) {
buf->empty_gc_cnt = 0;
vulkan_stage_buffer_reclaim(buf, current_point);
continue;
}
if (buf->buf_size <= min_stage_size) {
// We will not deallocate the first buffer
continue;
}
buf->empty_gc_cnt++;
if (buf->empty_gc_cnt >= 1000) {
// This buffer hasn't been used for a while, so let's deallocate it
stage_buffer_destroy(renderer, buf);
}
}
}
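// Rough summary of the ring-buffer lifecycle implemented above:
// vulkan_get_stage_span() hands out spans by advancing a buffer's head,
// vulkan_stage_mark_submit() records a {head, timeline_point} watermark for
// every non-empty buffer at submission, vulkan_stage_buffer_reclaim() later
// advances tail past each watermark whose timeline point has completed, and
// stage_buffer_gc() destroys buffers above the minimum size once they have
// stayed empty for 1000 consecutive collection cycles.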
VkCommandBuffer vulkan_record_stage_cb(struct wlr_vk_renderer *renderer) {
if (renderer->stage.cb == NULL) {
renderer->stage.cb = vulkan_acquire_command_buffer(renderer);
@ -379,7 +438,52 @@ VkCommandBuffer vulkan_record_stage_cb(struct wlr_vk_renderer *renderer) {
return renderer->stage.cb->vk;
}
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer) {
VkSemaphore vulkan_command_buffer_wait_sync_file(struct wlr_vk_renderer *renderer,
struct wlr_vk_command_buffer *render_cb, size_t sem_index, int sync_file_fd) {
VkResult res;
VkSemaphore *wait_semaphores = render_cb->wait_semaphores.data;
size_t wait_semaphores_len = render_cb->wait_semaphores.size / sizeof(wait_semaphores[0]);
VkSemaphore *sem_ptr;
if (sem_index >= wait_semaphores_len) {
sem_ptr = wl_array_add(&render_cb->wait_semaphores, sizeof(*sem_ptr));
if (sem_ptr == NULL) {
return VK_NULL_HANDLE;
}
*sem_ptr = VK_NULL_HANDLE;
} else {
sem_ptr = &wait_semaphores[sem_index];
}
if (*sem_ptr == VK_NULL_HANDLE) {
VkSemaphoreCreateInfo semaphore_info = {
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
};
res = vkCreateSemaphore(renderer->dev->dev, &semaphore_info, NULL, sem_ptr);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateSemaphore", res);
return VK_NULL_HANDLE;
}
}
VkImportSemaphoreFdInfoKHR import_info = {
.sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
.flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
.semaphore = *sem_ptr,
.fd = sync_file_fd,
};
res = renderer->dev->api.vkImportSemaphoreFdKHR(renderer->dev->dev, &import_info);
if (res != VK_SUCCESS) {
wlr_vk_error("vkImportSemaphoreFdKHR", res);
return VK_NULL_HANDLE;
}
return *sem_ptr;
}
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer, int wait_sync_file_fd) {
if (renderer->stage.cb == NULL) {
return false;
}
@ -392,6 +496,8 @@ bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer) {
return false;
}
VkSemaphore wait_semaphore;
VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT;
VkTimelineSemaphoreSubmitInfoKHR timeline_submit_info = {
.sType = VK_STRUCTURE_TYPE_TIMELINE_SEMAPHORE_SUBMIT_INFO_KHR,
.signalSemaphoreValueCount = 1,
@ -405,16 +511,32 @@ bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer) {
.signalSemaphoreCount = 1,
.pSignalSemaphores = &renderer->timeline_semaphore,
};
if (wait_sync_file_fd != -1) {
wait_semaphore = vulkan_command_buffer_wait_sync_file(renderer, cb, 0, wait_sync_file_fd);
if (wait_semaphore == VK_NULL_HANDLE) {
return false;
}
submit_info.waitSemaphoreCount = 1;
submit_info.pWaitSemaphores = &wait_semaphore;
submit_info.pWaitDstStageMask = &wait_stage;
}
vulkan_stage_mark_submit(renderer, timeline_point);
VkResult res = vkQueueSubmit(renderer->dev->queue, 1, &submit_info, VK_NULL_HANDLE);
if (res != VK_SUCCESS) {
wlr_vk_error("vkQueueSubmit", res);
return false;
}
// NOTE: don't release stage allocations here since they may still be
// used for reading. Will be done next frame.
if (!vulkan_wait_command_buffer(cb, renderer)) {
return false;
}
return vulkan_wait_command_buffer(cb, renderer);
// We did a blocking wait so this is now the current point
stage_buffer_gc(renderer, timeline_point);
return true;
}
struct wlr_vk_format_props *vulkan_format_props_from_drm(
@ -448,7 +570,6 @@ static bool init_command_buffer(struct wlr_vk_command_buffer *cb,
.vk = vk_cb,
};
wl_list_init(&cb->destroy_textures);
wl_list_init(&cb->stage_buffers);
return true;
}
@ -474,7 +595,7 @@ bool vulkan_wait_command_buffer(struct wlr_vk_command_buffer *cb,
}
static void release_command_buffer_resources(struct wlr_vk_command_buffer *cb,
struct wlr_vk_renderer *renderer, int64_t now) {
struct wlr_vk_renderer *renderer) {
struct wlr_vk_texture *texture, *texture_tmp;
wl_list_for_each_safe(texture, texture_tmp, &cb->destroy_textures, destroy_link) {
wl_list_remove(&texture->destroy_link);
@ -482,15 +603,6 @@ static void release_command_buffer_resources(struct wlr_vk_command_buffer *cb,
wlr_texture_destroy(&texture->wlr_texture);
}
struct wlr_vk_shared_buffer *buf, *buf_tmp;
wl_list_for_each_safe(buf, buf_tmp, &cb->stage_buffers, link) {
buf->allocs.size = 0;
buf->last_used_ms = now;
wl_list_remove(&buf->link);
wl_list_insert(&renderer->stage.buffers, &buf->link);
}
if (cb->color_transform) {
wlr_color_transform_unref(cb->color_transform);
cb->color_transform = NULL;
@ -509,22 +621,14 @@ static struct wlr_vk_command_buffer *get_command_buffer(
return NULL;
}
// Garbage collect any buffers that have remained unused for too long
int64_t now = get_current_time_msec();
struct wlr_vk_shared_buffer *buf, *buf_tmp;
wl_list_for_each_safe(buf, buf_tmp, &renderer->stage.buffers, link) {
if (buf->allocs.size == 0 && buf->last_used_ms + 10000 < now) {
shared_buffer_destroy(renderer, buf);
}
}
stage_buffer_gc(renderer, current_point);
// Destroy textures for completed command buffers
for (size_t i = 0; i < VULKAN_COMMAND_BUFFERS_CAP; i++) {
struct wlr_vk_command_buffer *cb = &renderer->command_buffers[i];
if (cb->vk != VK_NULL_HANDLE && !cb->recording &&
cb->timeline_point <= current_point) {
release_command_buffer_resources(cb, renderer, now);
release_command_buffer_resources(cb, renderer);
}
}
@ -1127,7 +1231,7 @@ static void vulkan_destroy(struct wlr_renderer *wlr_renderer) {
if (cb->vk == VK_NULL_HANDLE) {
continue;
}
release_command_buffer_resources(cb, renderer, 0);
release_command_buffer_resources(cb, renderer);
if (cb->binary_semaphore != VK_NULL_HANDLE) {
vkDestroySemaphore(renderer->dev->dev, cb->binary_semaphore, NULL);
}
@ -1139,9 +1243,9 @@ static void vulkan_destroy(struct wlr_renderer *wlr_renderer) {
}
// stage.cb automatically freed with command pool
struct wlr_vk_shared_buffer *buf, *tmp_buf;
struct wlr_vk_stage_buffer *buf, *tmp_buf;
wl_list_for_each_safe(buf, tmp_buf, &renderer->stage.buffers, link) {
shared_buffer_destroy(renderer, buf);
stage_buffer_destroy(renderer, buf);
}
struct wlr_vk_texture *tex, *tex_tmp;
@ -1188,7 +1292,7 @@ static void vulkan_destroy(struct wlr_renderer *wlr_renderer) {
vkDestroyPipelineLayout(dev->dev, pipeline_layout->vk, NULL);
vkDestroyDescriptorSetLayout(dev->dev, pipeline_layout->ds, NULL);
vkDestroySampler(dev->dev, pipeline_layout->sampler, NULL);
vkDestroySamplerYcbcrConversion(dev->dev, pipeline_layout->ycbcr.conversion, NULL);
renderer->dev->api.vkDestroySamplerYcbcrConversionKHR(dev->dev, pipeline_layout->ycbcr.conversion, NULL);
free(pipeline_layout);
}
@ -1218,7 +1322,8 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
VkFormat src_format, VkImage src_image,
uint32_t drm_format, uint32_t stride,
uint32_t width, uint32_t height, uint32_t src_x, uint32_t src_y,
uint32_t dst_x, uint32_t dst_y, void *data) {
uint32_t dst_x, uint32_t dst_y, void *data,
struct wlr_drm_syncobj_timeline *wait_timeline, uint64_t wait_point) {
VkDevice dev = vk_renderer->dev->dev;
const struct wlr_pixel_format_info *pixel_format_info = drm_get_pixel_format_info(drm_format);
@ -1404,7 +1509,17 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_ACCESS_MEMORY_READ_BIT);
if (!vulkan_submit_stage_wait(vk_renderer)) {
int wait_sync_file_fd = -1;
if (wait_timeline != NULL) {
wait_sync_file_fd = wlr_drm_syncobj_timeline_export_sync_file(wait_timeline, wait_point);
if (wait_sync_file_fd < 0) {
wlr_log(WLR_ERROR, "Failed to export wait timeline point as sync_file");
return false;
}
}
if (!vulkan_submit_stage_wait(vk_renderer, wait_sync_file_fd)) {
close(wait_sync_file_fd);
return false;
}
@ -1485,6 +1600,80 @@ static struct wlr_render_pass *vulkan_begin_buffer_pass(struct wlr_renderer *wlr
return &render_pass->base;
}
static const struct wlr_render_timer_impl render_timer_impl;
static struct wlr_render_timer *vulkan_render_timer_create(
struct wlr_renderer *wlr_renderer) {
struct wlr_vk_renderer *renderer = vulkan_get_renderer(wlr_renderer);
if (renderer->dev->timestamp_valid_bits == 0) {
wlr_log(WLR_ERROR, "Failed to create render timer: "
"timestamp queries not supported by queue family");
return NULL;
}
struct wlr_vk_render_timer *timer = calloc(1, sizeof(*timer));
if (!timer) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
return NULL;
}
VkQueryPoolCreateInfo pool_info = {
.sType = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO,
.queryType = VK_QUERY_TYPE_TIMESTAMP,
.queryCount = 2,
};
VkResult res = vkCreateQueryPool(renderer->dev->dev, &pool_info,
NULL, &timer->query_pool);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateQueryPool", res);
free(timer);
return NULL;
}
timer->base.impl = &render_timer_impl;
timer->renderer = renderer;
return &timer->base;
}
static int vulkan_render_timer_get_duration_ns(
struct wlr_render_timer *wlr_timer) {
struct wlr_vk_render_timer *timer =
wl_container_of(wlr_timer, timer, base);
struct wlr_vk_renderer *renderer = timer->renderer;
// Layout: [ timestamp1, avail1, timestamp2, avail2 ]
uint64_t data[4] = {0};
VkResult res = vkGetQueryPoolResults(renderer->dev->dev,
timer->query_pool, 0, 2, sizeof(data), data,
2 * sizeof(uint64_t),
VK_QUERY_RESULT_64_BIT | VK_QUERY_RESULT_WITH_AVAILABILITY_BIT);
if (res == VK_NOT_READY || data[1] == 0 || data[3] == 0) {
wlr_log(WLR_ERROR, "Failed to get render duration: "
"timestamp query results not yet ready");
return -1;
}
if (res != VK_SUCCESS) {
wlr_vk_error("vkGetQueryPoolResults", res);
return -1;
}
uint64_t ticks = data[2] - data[0];
return (int)(ticks * renderer->dev->timestamp_period);
}
static void vulkan_render_timer_destroy(
struct wlr_render_timer *wlr_timer) {
struct wlr_vk_render_timer *timer =
wl_container_of(wlr_timer, timer, base);
vkDestroyQueryPool(timer->renderer->dev->dev, timer->query_pool, NULL);
free(timer);
}
static const struct wlr_render_timer_impl render_timer_impl = {
.get_duration_ns = vulkan_render_timer_get_duration_ns,
.destroy = vulkan_render_timer_destroy,
};
static const struct wlr_renderer_impl renderer_impl = {
.get_texture_formats = vulkan_get_texture_formats,
.get_render_formats = vulkan_get_render_formats,
@ -1492,6 +1681,7 @@ static const struct wlr_renderer_impl renderer_impl = {
.get_drm_fd = vulkan_get_drm_fd,
.texture_from_buffer = vulkan_texture_from_buffer,
.begin_buffer_pass = vulkan_begin_buffer_pass,
.render_timer_create = vulkan_render_timer_create,
};
// Initializes the VkDescriptorSetLayout and VkPipelineLayout needed
@ -1692,6 +1882,25 @@ static bool pipeline_key_equals(const struct wlr_vk_pipeline_key *a,
return true;
}
static const VkVertexInputBindingDescription instance_vert_binding = {
.binding = 0,
.stride = sizeof(float) * 4,
.inputRate = VK_VERTEX_INPUT_RATE_INSTANCE,
};
static const VkVertexInputAttributeDescription instance_vert_attr = {
.location = 0,
.binding = 0,
.format = VK_FORMAT_R32G32B32A32_SFLOAT,
.offset = 0,
};
static const VkPipelineVertexInputStateCreateInfo instance_vert_input = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
.vertexBindingDescriptionCount = 1,
.pVertexBindingDescriptions = &instance_vert_binding,
.vertexAttributeDescriptionCount = 1,
.pVertexAttributeDescriptions = &instance_vert_attr,
};
// Initializes the pipeline for rendering textures and using the given
// VkRenderPass and VkPipelineLayout.
struct wlr_vk_pipeline *setup_get_or_create_pipeline(
@ -1823,10 +2032,6 @@ struct wlr_vk_pipeline *setup_get_or_create_pipeline(
.dynamicStateCount = sizeof(dyn_states) / sizeof(dyn_states[0]),
};
VkPipelineVertexInputStateCreateInfo vertex = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
};
VkGraphicsPipelineCreateInfo pinfo = {
.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
.layout = pipeline_layout->vk,
@ -1841,7 +2046,7 @@ struct wlr_vk_pipeline *setup_get_or_create_pipeline(
.pMultisampleState = &multisample,
.pViewportState = &viewport,
.pDynamicState = &dynamic,
.pVertexInputState = &vertex,
.pVertexInputState = &instance_vert_input,
};
VkPipelineCache cache = VK_NULL_HANDLE;
@ -1940,10 +2145,6 @@ static bool init_blend_to_output_pipeline(struct wlr_vk_renderer *renderer,
.dynamicStateCount = sizeof(dyn_states) / sizeof(dyn_states[0]),
};
VkPipelineVertexInputStateCreateInfo vertex = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
};
VkGraphicsPipelineCreateInfo pinfo = {
.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
.pNext = NULL,
@ -1958,7 +2159,7 @@ static bool init_blend_to_output_pipeline(struct wlr_vk_renderer *renderer,
.pMultisampleState = &multisample,
.pViewportState = &viewport,
.pDynamicState = &dynamic,
.pVertexInputState = &vertex,
.pVertexInputState = &instance_vert_input,
};
VkPipelineCache cache = VK_NULL_HANDLE;
@ -2049,10 +2250,10 @@ struct wlr_vk_pipeline_layout *get_or_create_pipeline_layout(
.yChromaOffset = VK_CHROMA_LOCATION_MIDPOINT,
.chromaFilter = VK_FILTER_LINEAR,
};
res = vkCreateSamplerYcbcrConversion(renderer->dev->dev,
res = renderer->dev->api.vkCreateSamplerYcbcrConversionKHR(renderer->dev->dev,
&conversion_create_info, NULL, &pipeline_layout->ycbcr.conversion);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateSamplerYcbcrConversion", res);
wlr_vk_error("vkCreateSamplerYcbcrConversionKHR", res);
free(pipeline_layout);
return NULL;
}

View file

@ -8,11 +8,14 @@ layout(push_constant, row_major) uniform UBO {
vec2 uv_size;
} data;
layout(location = 0) in vec4 inst_rect;
layout(location = 0) out vec2 uv;
void main() {
vec2 pos = vec2(float((gl_VertexIndex + 1) & 2) * 0.5f,
float(gl_VertexIndex & 2) * 0.5f);
pos = inst_rect.xy + pos * inst_rect.zw;
uv = data.uv_offset + pos * data.uv_size;
gl_Position = data.proj * vec4(pos, 0.0, 1.0);
}
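// Explanatory note, not part of the shader: for a 4-vertex triangle strip,
// gl_VertexIndex 0..3 yields pos = (0,0), (1,0), (1,1), (0,1), i.e. a unit
// quad. inst_rect.xy and inst_rect.zw carry the per-instance offset and size
// written by the renderer, so a single vkCmdDraw(cb, 4, clip_rects_len, 0, 0)
// emits one quad per clip rect without touching the scissor state.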

View file

@ -15,7 +15,7 @@
static const struct wlr_texture_impl texture_impl;
bool wlr_texture_is_vk(struct wlr_texture *wlr_texture) {
bool wlr_texture_is_vk(const struct wlr_texture *wlr_texture) {
return wlr_texture->impl == &texture_impl;
}
@ -72,16 +72,16 @@ static bool write_pixels(struct wlr_vk_texture *texture,
// get staging buffer
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer, bsize, format_info->bytes_per_block);
if (!span.buffer || span.alloc.size != bsize) {
if (!span.buffer || span.size != bsize) {
wlr_log(WLR_ERROR, "Failed to retrieve staging buffer");
free(copies);
return false;
}
char *map = (char*)span.buffer->cpu_mapping + span.alloc.start;
char *map = (char*)span.buffer->cpu_mapping + span.offset;
// upload data
uint32_t buf_off = span.alloc.start;
uint32_t buf_off = span.offset;
for (int i = 0; i < rects_len; i++) {
pixman_box32_t rect = rects[i];
uint32_t width = rect.x2 - rect.x1;
@ -238,7 +238,8 @@ static bool vulkan_texture_read_pixels(struct wlr_texture *wlr_texture,
void *p = wlr_texture_read_pixel_options_get_data(options);
return vulkan_read_pixels(texture->renderer, texture->format->vk, texture->image,
options->format, options->stride, src.width, src.height, src.x, src.y, 0, 0, p);
options->format, options->stride, src.width, src.height, src.x, src.y, 0, 0, p,
options->wait_timeline, options->wait_point);
}
static uint32_t vulkan_texture_preferred_read_format(struct wlr_texture *wlr_texture) {
@ -329,6 +330,7 @@ struct wlr_vk_texture_view *vulkan_texture_get_or_create_view(struct wlr_vk_text
view->ds_pool = vulkan_alloc_texture_ds(texture->renderer, pipeline_layout->ds, &view->ds);
if (!view->ds_pool) {
vkDestroyImageView(dev, view->image_view, NULL);
free(view);
wlr_log(WLR_ERROR, "failed to allocate descriptor");
return NULL;
@ -653,7 +655,7 @@ VkImage vulkan_import_dmabuf(struct wlr_vk_renderer *renderer,
.sType = VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2,
};
vkGetImageMemoryRequirements2(dev, &memri, &memr);
renderer->dev->api.vkGetImageMemoryRequirements2KHR(dev, &memri, &memr);
int mem = vulkan_find_mem_type(renderer->dev, 0,
memr.memoryRequirements.memoryTypeBits & fdp.memoryTypeBits);
if (mem < 0) {
@ -712,7 +714,7 @@ VkImage vulkan_import_dmabuf(struct wlr_vk_renderer *renderer,
}
}
res = vkBindImageMemory2(dev, mem_count, bindi);
res = renderer->dev->api.vkBindImageMemory2KHR(dev, mem_count, bindi);
if (res != VK_SUCCESS) {
wlr_vk_error("vkBindMemory failed", res);
goto error_image;

View file

@ -44,7 +44,7 @@ const char *vulkan_strerror(VkResult err) {
ERR_STR(ERROR_INVALID_EXTERNAL_HANDLE);
ERR_STR(ERROR_FRAGMENTATION);
ERR_STR(ERROR_INVALID_OPAQUE_CAPTURE_ADDRESS);
ERR_STR(PIPELINE_COMPILE_REQUIRED);
ERR_STR(PIPELINE_COMPILE_REQUIRED_EXT);
ERR_STR(ERROR_SURFACE_LOST_KHR);
ERR_STR(ERROR_NATIVE_WINDOW_IN_USE_KHR);
ERR_STR(SUBOPTIMAL_KHR);

View file

@ -81,21 +81,6 @@ static VKAPI_ATTR VkBool32 debug_callback(VkDebugUtilsMessageSeverityFlagBitsEXT
}
struct wlr_vk_instance *vulkan_instance_create(bool debug) {
// we require vulkan 1.1
PFN_vkEnumerateInstanceVersion pfEnumInstanceVersion =
(PFN_vkEnumerateInstanceVersion)
vkGetInstanceProcAddr(VK_NULL_HANDLE, "vkEnumerateInstanceVersion");
if (!pfEnumInstanceVersion) {
wlr_log(WLR_ERROR, "wlroots requires vulkan 1.1 which is not available");
return NULL;
}
uint32_t ini_version;
if (pfEnumInstanceVersion(&ini_version) != VK_SUCCESS ||
ini_version < VK_API_VERSION_1_1) {
wlr_log(WLR_ERROR, "wlroots requires vulkan 1.1 which is not available");
return NULL;
}
uint32_t avail_extc = 0;
VkResult res;
@ -125,7 +110,18 @@ struct wlr_vk_instance *vulkan_instance_create(bool debug) {
}
size_t extensions_len = 0;
const char *extensions[1] = {0};
const char *extensions[8] = {0};
extensions[extensions_len++] = VK_KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2_EXTENSION_NAME;
extensions[extensions_len++] = VK_KHR_EXTERNAL_SEMAPHORE_CAPABILITIES_EXTENSION_NAME;
extensions[extensions_len++] = VK_KHR_EXTERNAL_MEMORY_CAPABILITIES_EXTENSION_NAME;
for (size_t i = 0; i < extensions_len; i++) {
if (!check_extension(avail_ext_props, avail_extc, extensions[i])) {
wlr_log(WLR_ERROR, "vulkan: required instance extension %s not found",
extensions[i]);
goto error;
}
}
bool debug_utils_found = false;
if (debug && check_extension(avail_ext_props, avail_extc,
@ -140,7 +136,7 @@ struct wlr_vk_instance *vulkan_instance_create(bool debug) {
.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pEngineName = "wlroots",
.engineVersion = WLR_VERSION_NUM,
.apiVersion = VK_API_VERSION_1_1,
.apiVersion = VK_API_VERSION_1_0,
};
VkInstanceCreateInfo instance_info = {
@ -282,20 +278,10 @@ VkPhysicalDevice vulkan_find_drm_phdev(struct wlr_vk_instance *ini, int drm_fd)
for (uint32_t i = 0; i < num_phdevs; ++i) {
VkPhysicalDevice phdev = phdevs[i];
// check whether device supports vulkan 1.1, needed for
// vkGetPhysicalDeviceProperties2
VkPhysicalDeviceProperties phdev_props;
vkGetPhysicalDeviceProperties(phdev, &phdev_props);
log_phdev(&phdev_props);
if (phdev_props.apiVersion < VK_API_VERSION_1_1) {
// NOTE: we could additionally check whether the
// VkPhysicalDeviceProperties2KHR extension is supported but
// implementations not supporting 1.1 are unlikely in future
continue;
}
// check for extensions
uint32_t avail_extc = 0;
res = vkEnumerateDeviceExtensionProperties(phdev, NULL,
@ -474,6 +460,12 @@ struct wlr_vk_device *vulkan_device_create(struct wlr_vk_instance *ini,
extensions[extensions_len++] = VK_EXT_IMAGE_DRM_FORMAT_MODIFIER_EXTENSION_NAME;
extensions[extensions_len++] = VK_KHR_TIMELINE_SEMAPHORE_EXTENSION_NAME; // or vulkan 1.2
extensions[extensions_len++] = VK_KHR_SYNCHRONIZATION_2_EXTENSION_NAME; // or vulkan 1.3
extensions[extensions_len++] = VK_KHR_EXTERNAL_MEMORY_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_BIND_MEMORY_2_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_SAMPLER_YCBCR_CONVERSION_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_EXTERNAL_SEMAPHORE_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_MAINTENANCE_1_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_GET_MEMORY_REQUIREMENTS_2_EXTENSION_NAME; // or vulkan 1.1
for (size_t i = 0; i < extensions_len; i++) {
if (!check_extension(avail_ext_props, avail_extc, extensions[i])) {
@ -496,10 +488,15 @@ struct wlr_vk_device *vulkan_device_create(struct wlr_vk_instance *ini,
graphics_found = queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
if (graphics_found) {
dev->queue_family = i;
dev->timestamp_valid_bits = queue_props[i].timestampValidBits;
break;
}
}
assert(graphics_found);
VkPhysicalDeviceProperties phdev_props;
vkGetPhysicalDeviceProperties(phdev, &phdev_props);
dev->timestamp_period = phdev_props.limits.timestampPeriod;
}
bool exportable_semaphore = false, importable_semaphore = false;
@ -564,17 +561,17 @@ struct wlr_vk_device *vulkan_device_create(struct wlr_vk_instance *ini,
.pQueuePriorities = &prio,
};
VkDeviceQueueGlobalPriorityCreateInfoKHR global_priority;
VkDeviceQueueGlobalPriorityCreateInfoEXT global_priority;
bool has_global_priority = check_extension(avail_ext_props, avail_extc,
VK_KHR_GLOBAL_PRIORITY_EXTENSION_NAME);
VK_EXT_GLOBAL_PRIORITY_EXTENSION_NAME);
if (has_global_priority) {
// If global priorities are supported, request a high-priority context
global_priority = (VkDeviceQueueGlobalPriorityCreateInfoKHR){
.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_GLOBAL_PRIORITY_CREATE_INFO_KHR,
.globalPriority = VK_QUEUE_GLOBAL_PRIORITY_HIGH_KHR,
global_priority = (VkDeviceQueueGlobalPriorityCreateInfoEXT){
.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_GLOBAL_PRIORITY_CREATE_INFO_EXT,
.globalPriority = VK_QUEUE_GLOBAL_PRIORITY_HIGH_EXT,
};
qinfo.pNext = &global_priority;
extensions[extensions_len++] = VK_KHR_GLOBAL_PRIORITY_EXTENSION_NAME;
extensions[extensions_len++] = VK_EXT_GLOBAL_PRIORITY_EXTENSION_NAME;
wlr_log(WLR_DEBUG, "Requesting a high-priority device queue");
} else {
wlr_log(WLR_DEBUG, "Global priorities are not supported, "
@ -630,6 +627,10 @@ struct wlr_vk_device *vulkan_device_create(struct wlr_vk_instance *ini,
load_device_proc(dev, "vkGetSemaphoreCounterValueKHR",
&dev->api.vkGetSemaphoreCounterValueKHR);
load_device_proc(dev, "vkQueueSubmit2KHR", &dev->api.vkQueueSubmit2KHR);
load_device_proc(dev, "vkBindImageMemory2KHR", &dev->api.vkBindImageMemory2KHR);
load_device_proc(dev, "vkCreateSamplerYcbcrConversionKHR", &dev->api.vkCreateSamplerYcbcrConversionKHR);
load_device_proc(dev, "vkDestroySamplerYcbcrConversionKHR", &dev->api.vkDestroySamplerYcbcrConversionKHR);
load_device_proc(dev, "vkGetImageMemoryRequirements2KHR", &dev->api.vkGetImageMemoryRequirements2KHR);
if (has_external_semaphore_fd) {
load_device_proc(dev, "vkGetSemaphoreFdKHR", &dev->api.vkGetSemaphoreFdKHR);

test/bench_scene.c (new file, +154 lines)

@ -0,0 +1,154 @@
#include <stdio.h>
#include <time.h>
#include <wlr/types/wlr_scene.h>
struct tree_spec {
// Parameters for the tree we'll construct
int depth;
int branching;
int rect_size;
int spread;
// Stats around the tree we built
int tree_count;
int rect_count;
int max_x;
int max_y;
};
static int max(int a, int b) {
return a > b ? a : b;
}
static double timespec_diff_msec(struct timespec *start, struct timespec *end) {
return (double)(end->tv_sec - start->tv_sec) * 1e3 +
(double)(end->tv_nsec - start->tv_nsec) / 1e6;
}
static bool build_tree(struct wlr_scene_tree *parent, struct tree_spec *spec,
int depth, int x, int y) {
if (depth == spec->depth) {
float color[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
struct wlr_scene_rect *rect =
wlr_scene_rect_create(parent, spec->rect_size, spec->rect_size, color);
if (rect == NULL) {
fprintf(stderr, "wlr_scene_rect_create failed\n");
return false;
}
wlr_scene_node_set_position(&rect->node, x, y);
spec->max_x = max(spec->max_x, x + spec->rect_size);
spec->max_y = max(spec->max_y, y + spec->rect_size);
spec->rect_count++;
return true;
}
for (int i = 0; i < spec->branching; i++) {
struct wlr_scene_tree *child = wlr_scene_tree_create(parent);
if (child == NULL) {
fprintf(stderr, "wlr_scene_tree_create failed\n");
return false;
}
spec->tree_count++;
int offset = i * spec->spread;
wlr_scene_node_set_position(&child->node, offset, offset);
if (!build_tree(child, spec, depth + 1, x + offset, y + offset)) {
return false;
}
}
return true;
}
static bool bench_create_tree(struct wlr_scene *scene, struct tree_spec *spec) {
struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC, &start);
if (!build_tree(&scene->tree, spec, 0, 0, 0)) {
fprintf(stderr, "build_tree failed\n");
return false;
}
clock_gettime(CLOCK_MONOTONIC, &end);
printf("Built tree with %d tree nodes, %d rect nodes\n\n",
spec->tree_count, spec->rect_count);
double elapsed = timespec_diff_msec(&start, &end);
int nodes = spec->tree_count + spec->rect_count;
printf("create test tree: %d nodes, %.3f ms, %.0f nodes/ms\n",
nodes, elapsed, nodes / elapsed);
return true;
}
static void bench_scene_node_at(struct wlr_scene *scene, struct tree_spec *spec) {
struct timespec start, end;
int iters = 10000;
int hits = 0;
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i = 0; i < iters; i++) {
// Spread lookups across the tree extent
double lx = (double)(i * 97 % spec->max_x);
double ly = (double)(i * 53 % spec->max_y);
double nx, ny;
struct wlr_scene_node *node =
wlr_scene_node_at(&scene->tree.node, lx, ly, &nx, &ny);
if (node != NULL) {
hits++;
}
}
clock_gettime(CLOCK_MONOTONIC, &end);
double elapsed = timespec_diff_msec(&start, &end);
int nodes = (spec->tree_count + spec->rect_count) * iters;
printf("wlr_scene_node_at: %d iters, %.3f ms, %.0f nodes/ms (hits: %d/%d)\n",
iters, elapsed, nodes / elapsed, hits, iters);
}
static void noop_iterator(struct wlr_scene_buffer *buffer,
int sx, int sy, void *user_data) {
(void)buffer;
(void)sx;
(void)sy;
int *cnt = user_data;
(*cnt)++;
}
static void bench_scene_node_for_each_buffer(struct wlr_scene *scene, struct tree_spec *spec) {
struct timespec start, end;
int iters = 10000;
int hits = 0;
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i = 0; i < iters; i++) {
wlr_scene_node_for_each_buffer(&scene->tree.node,
noop_iterator, &hits);
}
clock_gettime(CLOCK_MONOTONIC, &end);
double elapsed = timespec_diff_msec(&start, &end);
int nodes = (spec->tree_count + spec->rect_count) * iters;
printf("wlr_scene_node_for_each_buffer: %d iters, %.3f ms, %.0f nodes/ms (hits: %d/%d)\n",
iters, elapsed, nodes / elapsed, hits, iters);
}
int main(void) {
struct wlr_scene *scene = wlr_scene_create();
if (scene == NULL) {
fprintf(stderr, "wlr_scene_create failed\n");
return 99;
}
struct tree_spec spec = {
.depth = 5,
.branching = 5,
.rect_size = 10,
.spread = 100,
};
if (!bench_create_tree(scene, &spec)) {
return 99;
}
bench_scene_node_at(scene, &spec);
bench_scene_node_for_each_buffer(scene, &spec);
wlr_scene_node_destroy(&scene->tree.node);
return 0;
}
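A quick sanity check on the numbers this benchmark reports (not part of the file above): build_tree() creates `branching` subtrees at each of `depth` levels and one rect per leaf, so the default spec (depth 5, branching 5) should yield 5 + 25 + 125 + 625 + 3125 = 3905 tree nodes and 5^5 = 3125 rect nodes, i.e. 7030 nodes per traversal. A hypothetical helper mirroring that structure:

// branching^d nodes are created at recursion depth d = 1..depth; the nodes
// hanging off the deepest trees are the rects.
static void expected_counts(int depth, int branching, int *trees, int *rects) {
	int level = 1;
	*trees = 0;
	for (int d = 0; d < depth; d++) {
		level *= branching;
		*trees += level;
	}
	*rects = level;
}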

test/meson.build (new file, +32 lines)

@ -0,0 +1,32 @@
# Used to test internal symbols
lib_wlr_internal = static_library(
versioned_name + '-internal',
objects: lib_wlr.extract_all_objects(recursive: false),
dependencies: wlr_deps,
include_directories: [wlr_inc],
install: false,
)
test(
'box',
executable('test-box', 'test_box.c', dependencies: wlroots),
)
if features.get('vulkan-renderer')
test(
'vulkan_stage_buffer',
executable(
'test-vulkan-stage-buffer',
'test_vulkan_stage_buffer.c',
link_with: lib_wlr_internal,
dependencies: wlr_deps,
include_directories: wlr_inc,
),
)
endif
benchmark(
'scene',
executable('bench-scene', 'bench_scene.c', dependencies: wlroots),
timeout: 30,
)

test/test_box.c (new file, +149 lines)

@ -0,0 +1,149 @@
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <wlr/util/box.h>
static void test_box_empty(void) {
// NULL is empty
assert(wlr_box_empty(NULL));
// Zero width/height
struct wlr_box box = { .x = 0, .y = 0, .width = 0, .height = 10 };
assert(wlr_box_empty(&box));
box = (struct wlr_box){ .x = 0, .y = 0, .width = 10, .height = 0 };
assert(wlr_box_empty(&box));
// Negative width/height
box = (struct wlr_box){ .x = 0, .y = 0, .width = -1, .height = 10 };
assert(wlr_box_empty(&box));
box = (struct wlr_box){ .x = 0, .y = 0, .width = 10, .height = -1 };
assert(wlr_box_empty(&box));
// Valid box
box = (struct wlr_box){ .x = 0, .y = 0, .width = 10, .height = 10 };
assert(!wlr_box_empty(&box));
}
static void test_box_intersection(void) {
struct wlr_box dest;
// Overlapping
struct wlr_box a = { .x = 0, .y = 0, .width = 100, .height = 100 };
struct wlr_box b = { .x = 50, .y = 50, .width = 100, .height = 100 };
assert(wlr_box_intersection(&dest, &a, &b));
assert(dest.x == 50 && dest.y == 50 &&
dest.width == 50 && dest.height == 50);
// Non-overlapping
b = (struct wlr_box){ .x = 200, .y = 200, .width = 50, .height = 50 };
assert(!wlr_box_intersection(&dest, &a, &b));
assert(dest.width == 0 && dest.height == 0);
// Touching edges
b = (struct wlr_box){ .x = 100, .y = 0, .width = 50, .height = 50 };
assert(!wlr_box_intersection(&dest, &a, &b));
// Self-intersection
assert(wlr_box_intersection(&dest, &a, &a));
assert(dest.x == a.x && dest.y == a.y &&
dest.width == a.width && dest.height == a.height);
// Empty input
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_intersection(&dest, &a, &empty));
// NULL input
assert(!wlr_box_intersection(&dest, &a, NULL));
assert(!wlr_box_intersection(&dest, NULL, &a));
}
static void test_box_intersects_box(void) {
// Overlapping
struct wlr_box a = { .x = 0, .y = 0, .width = 100, .height = 100 };
struct wlr_box b = { .x = 50, .y = 50, .width = 100, .height = 100 };
assert(wlr_box_intersects(&a, &b));
// Non-overlapping
b = (struct wlr_box){ .x = 200, .y = 200, .width = 50, .height = 50 };
assert(!wlr_box_intersects(&a, &b));
// Touching edges
b = (struct wlr_box){ .x = 100, .y = 0, .width = 50, .height = 50 };
assert(!wlr_box_intersects(&a, &b));
// Self-intersection
assert(wlr_box_intersects(&a, &a));
// Empty input
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_intersects(&a, &empty));
// NULL input
assert(!wlr_box_intersects(&a, NULL));
assert(!wlr_box_intersects(NULL, &a));
}
static void test_box_contains_point(void) {
struct wlr_box box = { .x = 10, .y = 20, .width = 100, .height = 50 };
// Interior point
assert(wlr_box_contains_point(&box, 50, 40));
// Inclusive lower bound
assert(wlr_box_contains_point(&box, 10, 20));
// Exclusive upper bound
assert(!wlr_box_contains_point(&box, 110, 70));
assert(!wlr_box_contains_point(&box, 110, 40));
assert(!wlr_box_contains_point(&box, 50, 70));
// Outside
assert(!wlr_box_contains_point(&box, 5, 40));
assert(!wlr_box_contains_point(&box, 50, 15));
// Empty box
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_contains_point(&empty, 0, 0));
// NULL
assert(!wlr_box_contains_point(NULL, 0, 0));
}
static void test_box_contains_box(void) {
struct wlr_box outer = { .x = 0, .y = 0, .width = 100, .height = 100 };
// Fully contained
struct wlr_box inner = { .x = 10, .y = 10, .width = 50, .height = 50 };
assert(wlr_box_contains_box(&outer, &inner));
// Self-containment
assert(wlr_box_contains_box(&outer, &outer));
// Partial overlap — not contained
struct wlr_box partial = { .x = 50, .y = 50, .width = 100, .height = 100 };
assert(!wlr_box_contains_box(&outer, &partial));
// Empty inner
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_contains_box(&outer, &empty));
// Empty outer
assert(!wlr_box_contains_box(&empty, &inner));
// NULL
assert(!wlr_box_contains_box(&outer, NULL));
assert(!wlr_box_contains_box(NULL, &outer));
}
int main(void) {
#ifdef NDEBUG
fprintf(stderr, "NDEBUG must be disabled for tests\n");
return 1;
#endif
test_box_empty();
test_box_intersection();
test_box_intersects_box();
test_box_contains_point();
test_box_contains_box();
return 0;
}

test/test_vulkan_stage_buffer.c (new file, +200 lines)

@ -0,0 +1,200 @@
#include <assert.h>
#include <stdio.h>
#include <wayland-util.h>
#include "render/vulkan.h"
#define BUF_SIZE 1024
#define ALLOC_FAIL ((VkDeviceSize)-1)
static void stage_buffer_init(struct wlr_vk_stage_buffer *buf) {
*buf = (struct wlr_vk_stage_buffer){
.buf_size = BUF_SIZE,
};
wl_array_init(&buf->watermarks);
}
static void stage_buffer_finish(struct wlr_vk_stage_buffer *buf) {
wl_array_release(&buf->watermarks);
}
static void push_watermark(struct wlr_vk_stage_buffer *buf,
uint64_t timeline_point) {
struct wlr_vk_stage_watermark *mark = wl_array_add(
&buf->watermarks, sizeof(*mark));
assert(mark != NULL);
*mark = (struct wlr_vk_stage_watermark){
.head = buf->head,
.timeline_point = timeline_point,
};
}
static size_t watermark_count(const struct wlr_vk_stage_buffer *buf) {
return buf->watermarks.size / sizeof(struct wlr_vk_stage_watermark);
}
static void test_alloc_simple(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
assert(buf.head == 100);
assert(vulkan_stage_buffer_alloc(&buf, 200, 1) == 100);
assert(buf.head == 300);
assert(buf.tail == 0);
stage_buffer_finish(&buf);
}
static void test_alloc_alignment(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 7, 1) == 0);
assert(buf.head == 7);
assert(vulkan_stage_buffer_alloc(&buf, 4, 16) == 16);
assert(buf.head == 20);
assert(vulkan_stage_buffer_alloc(&buf, 8, 8) == 24);
assert(buf.head == 32);
stage_buffer_finish(&buf);
}
static void test_alloc_limit(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
// We do not allow allocations that would cause head to equal tail
assert(vulkan_stage_buffer_alloc(&buf, BUF_SIZE, 1) == ALLOC_FAIL);
assert(buf.head == 0);
assert(vulkan_stage_buffer_alloc(&buf, BUF_SIZE-1, 1) == 0);
assert(buf.head == BUF_SIZE-1);
stage_buffer_finish(&buf);
}
static void test_alloc_wrap(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
// Fill the first 924 bytes
assert(vulkan_stage_buffer_alloc(&buf, BUF_SIZE - 100, 1) == 0);
push_watermark(&buf, 1);
// Fill the end of the buffer
assert(vulkan_stage_buffer_alloc(&buf, 50, 1) == 924);
push_watermark(&buf, 2);
// First, check that we don't wrap prematurely
assert(vulkan_stage_buffer_alloc(&buf, 50, 1) == ALLOC_FAIL);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == ALLOC_FAIL);
// Free the beginning of the buffer and try to wrap again
vulkan_stage_buffer_reclaim(&buf, 1);
assert(vulkan_stage_buffer_alloc(&buf, 50, 1) == 0);
assert(buf.tail == 924);
assert(buf.head == 50);
// Check that freeing from the end of the buffer still works
vulkan_stage_buffer_reclaim(&buf, 2);
assert(buf.tail == 974);
assert(buf.head == 50);
// Check that allocations still work
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 50);
assert(buf.tail == 974);
assert(buf.head == 150);
stage_buffer_finish(&buf);
}
static void test_reclaim_empty(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
// Fresh buffer with no watermarks and head == tail == 0 is drained.
vulkan_stage_buffer_reclaim(&buf, 0);
assert(buf.head == buf.tail);
assert(buf.tail == 0);
stage_buffer_finish(&buf);
}
static void test_reclaim_pending_not_completed(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
push_watermark(&buf, 1);
// current point hasn't reached the watermark yet.
vulkan_stage_buffer_reclaim(&buf, 0);
assert(buf.head != buf.tail);
assert(buf.tail == 0);
assert(watermark_count(&buf) == 1);
stage_buffer_finish(&buf);
}
static void test_reclaim_partial(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
push_watermark(&buf, 1);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 100);
push_watermark(&buf, 2);
// Only the first watermark is reached.
vulkan_stage_buffer_reclaim(&buf, 1);
assert(buf.head != buf.tail);
assert(buf.tail == 100);
assert(watermark_count(&buf) == 1);
const struct wlr_vk_stage_watermark *remaining = buf.watermarks.data;
assert(remaining[0].head == 200);
assert(remaining[0].timeline_point == 2);
stage_buffer_finish(&buf);
}
static void test_reclaim_all(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
push_watermark(&buf, 1);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 100);
push_watermark(&buf, 2);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 200);
push_watermark(&buf, 3);
vulkan_stage_buffer_reclaim(&buf, 100);
assert(buf.head == buf.tail);
assert(buf.tail == 300);
assert(watermark_count(&buf) == 0);
stage_buffer_finish(&buf);
}
int main(void) {
#ifdef NDEBUG
fprintf(stderr, "NDEBUG must be disabled for tests\n");
return 1;
#endif
test_alloc_simple();
test_alloc_alignment();
test_alloc_limit();
test_alloc_wrap();
test_reclaim_empty();
test_reclaim_pending_not_completed();
test_reclaim_partial();
test_reclaim_all();
return 0;
}
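A note on the offsets asserted in test_alloc_alignment() above: they match the usual power-of-two round-up of the current head, which is presumably what vulkan_stage_buffer_alloc() does internally (the allocator itself is not shown in this diff):

// Assumed round-up: align_up(7, 16) == 16 and align_up(20, 8) == 24, matching
// the test. Only valid for power-of-two alignments, which Vulkan guarantees
// for buffer alignment requirements.
static VkDeviceSize align_up(VkDeviceSize offset, VkDeviceSize align) {
	return (offset + align - 1) & ~(align - 1);
}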


@ -1,6 +1,6 @@
PKG_CONFIG?=pkg-config
PKGS="wlroots-0.20" wayland-server xkbcommon
PKGS="wlroots-0.21" wayland-server xkbcommon
CFLAGS_PKG_CONFIG!=$(PKG_CONFIG) --cflags $(PKGS)
CFLAGS+=$(CFLAGS_PKG_CONFIG)
LIBS!=$(PKG_CONFIG) --libs $(PKGS)


@ -17,7 +17,7 @@ foreign_toplevel_manager_from_resource(struct wl_resource *resource) {
}
bool wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_accept(
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request *request,
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event *request,
struct wlr_ext_image_capture_source_v1 *source) {
return wlr_ext_image_capture_source_v1_create_resource(source, request->client, request->new_id);
}
@ -34,18 +34,13 @@ static void foreign_toplevel_manager_handle_create_source(struct wl_client *clie
return;
}
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request *request =
calloc(1, sizeof(*request));
if (request == NULL) {
wl_resource_post_no_memory(manager_resource);
return;
}
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event request = {
.toplevel_handle = toplevel_handle,
.client = client,
.new_id = new_id,
};
request->toplevel_handle = toplevel_handle;
request->client = client;
request->new_id = new_id;
wl_signal_emit_mutable(&manager->events.new_request, request);
wl_signal_emit_mutable(&manager->events.capture_request, &request);
}
static void foreign_toplevel_manager_handle_destroy(struct wl_client *client,
@ -76,7 +71,7 @@ static void foreign_toplevel_manager_handle_display_destroy(struct wl_listener *
wl_container_of(listener, manager, display_destroy);
wl_signal_emit_mutable(&manager->events.destroy, NULL);
assert(wl_list_empty(&manager->events.destroy.listener_list));
assert(wl_list_empty(&manager->events.new_request.listener_list));
assert(wl_list_empty(&manager->events.capture_request.listener_list));
wl_list_remove(&manager->display_destroy.link);
wl_global_destroy(manager->global);
free(manager);
@ -102,7 +97,7 @@ wlr_ext_foreign_toplevel_image_capture_source_manager_v1_create(struct wl_displa
}
wl_signal_init(&manager->events.destroy);
wl_signal_init(&manager->events.new_request);
wl_signal_init(&manager->events.capture_request);
manager->display_destroy.notify = foreign_toplevel_manager_handle_display_destroy;
wl_display_add_destroy_listener(display, &manager->display_destroy);


@ -110,6 +110,10 @@ static const struct wlr_ext_image_capture_source_v1_interface output_source_impl
static void source_update_buffer_constraints(struct wlr_ext_output_image_capture_source_v1 *source) {
struct wlr_output *output = source->output;
if (!output->enabled) {
return;
}
if (!wlr_output_configure_primary_swapchain(output, NULL, &output->swapchain)) {
return;
}
@ -123,7 +127,8 @@ static void source_handle_output_commit(struct wl_listener *listener,
struct wlr_ext_output_image_capture_source_v1 *source = wl_container_of(listener, source, output_commit);
struct wlr_output_event_commit *event = data;
if (event->state->committed & (WLR_OUTPUT_STATE_MODE | WLR_OUTPUT_STATE_RENDER_FORMAT)) {
if (event->state->committed & (WLR_OUTPUT_STATE_MODE |
WLR_OUTPUT_STATE_RENDER_FORMAT | WLR_OUTPUT_STATE_ENABLED)) {
source_update_buffer_constraints(source);
}


@ -34,13 +34,14 @@ struct scene_node_source_frame_event {
static size_t last_output_num = 0;
static void _get_scene_node_extents(struct wlr_scene_node *node, struct wlr_box *box, int lx, int ly) {
static void _get_scene_node_extents(struct wlr_scene_node *node, int lx, int ly,
int *x_min, int *y_min, int *x_max, int *y_max) {
switch (node->type) {
case WLR_SCENE_NODE_TREE:;
struct wlr_scene_tree *scene_tree = wlr_scene_tree_from_node(node);
struct wlr_scene_node *child;
wl_list_for_each(child, &scene_tree->children, link) {
_get_scene_node_extents(child, box, lx + child->x, ly + child->y);
_get_scene_node_extents(child, lx + child->x, ly + child->y, x_min, y_min, x_max, y_max);
}
break;
case WLR_SCENE_NODE_RECT:
@ -48,27 +49,30 @@ static void _get_scene_node_extents(struct wlr_scene_node *node, struct wlr_box
struct wlr_box node_box = { .x = lx, .y = ly };
scene_node_get_size(node, &node_box.width, &node_box.height);
if (node_box.x < box->x) {
box->x = node_box.x;
if (node_box.x < *x_min) {
*x_min = node_box.x;
}
if (node_box.y < box->y) {
box->y = node_box.y;
if (node_box.y < *y_min) {
*y_min = node_box.y;
}
if (node_box.x + node_box.width > box->x + box->width) {
box->width = node_box.x + node_box.width - box->x;
if (node_box.x + node_box.width > *x_max) {
*x_max = node_box.x + node_box.width;
}
if (node_box.y + node_box.height > box->y + box->height) {
box->height = node_box.y + node_box.height - box->y;
if (node_box.y + node_box.height > *y_max) {
*y_max = node_box.y + node_box.height;
}
break;
}
}
static void get_scene_node_extents(struct wlr_scene_node *node, struct wlr_box *box) {
*box = (struct wlr_box){ .x = INT_MAX, .y = INT_MAX };
int lx = 0, ly = 0;
wlr_scene_node_coords(node, &lx, &ly);
_get_scene_node_extents(node, box, lx, ly);
*box = (struct wlr_box){ .x = INT_MAX, .y = INT_MAX };
int x_max = INT_MIN, y_max = INT_MIN;
_get_scene_node_extents(node, lx, ly, &box->x, &box->y, &x_max, &y_max);
box->width = x_max - box->x;
box->height = y_max - box->y;
}
static void source_render(struct scene_node_source *source) {


@ -48,6 +48,7 @@ wlr_files += files(
'wlr_data_control_v1.c',
'wlr_drm.c',
'wlr_export_dmabuf_v1.c',
'wlr_ext_background_effect_v1.c',
'wlr_ext_data_control_v1.c',
'wlr_ext_foreign_toplevel_list_v1.c',
'wlr_ext_image_copy_capture_v1.c',


@ -159,9 +159,8 @@ static void output_cursor_update_visible(struct wlr_output_cursor *cursor) {
struct wlr_box cursor_box;
output_cursor_get_box(cursor, &cursor_box);
struct wlr_box intersection;
cursor->visible =
wlr_box_intersection(&intersection, &output_box, &cursor_box);
wlr_box_intersects(&output_box, &cursor_box);
}
static bool output_pick_cursor_format(struct wlr_output *output,


@ -233,6 +233,11 @@ static void output_apply_state(struct wlr_output *output,
output->transform = state->transform;
}
if (state->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) {
output->color_encoding = state->color_encoding;
output->color_range = state->color_range;
}
if (state->committed & WLR_OUTPUT_STATE_IMAGE_DESCRIPTION) {
if (state->image_description != NULL) {
output->image_description_value = *state->image_description;
@ -580,6 +585,11 @@ static uint32_t output_compare_state(struct wlr_output *output,
output->color_transform == state->color_transform) {
fields |= WLR_OUTPUT_STATE_COLOR_TRANSFORM;
}
if ((state->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) &&
output->color_encoding == state->color_encoding &&
output->color_range == state->color_range) {
fields |= WLR_OUTPUT_STATE_COLOR_REPRESENTATION;
}
return fields;
}
@ -615,7 +625,7 @@ static bool output_basic_test(struct wlr_output *output,
};
struct wlr_box dst_box;
output_state_get_buffer_dst_box(state, &dst_box);
if (!wlr_box_intersection(&output_box, &output_box, &dst_box)) {
if (!wlr_box_intersects(&output_box, &dst_box)) {
wlr_log(WLR_ERROR, "Primary buffer is entirely off-screen or 0-sized");
return false;
}
@ -632,6 +642,10 @@ static bool output_basic_test(struct wlr_output *output,
wlr_log(WLR_DEBUG, "Tried to set signal timeline without a buffer");
return false;
}
if (state->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) {
wlr_log(WLR_DEBUG, "Tried to set color representation without a buffer");
return false;
}
}
if (state->committed & WLR_OUTPUT_STATE_RENDER_FORMAT) {


@ -141,6 +141,14 @@ bool wlr_output_state_set_image_description(struct wlr_output_state *state,
return true;
}
void wlr_output_state_set_color_encoding_and_range(
struct wlr_output_state *state,
enum wlr_color_encoding encoding, enum wlr_color_range range) {
state->committed |= WLR_OUTPUT_STATE_COLOR_REPRESENTATION;
state->color_encoding = encoding;
state->color_range = range;
}
bool wlr_output_state_copy(struct wlr_output_state *dst,
const struct wlr_output_state *src) {
struct wlr_output_state copy = *src;
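A hypothetical call site for the new setter (only the setter itself comes from the patch; everything else here is illustrative). As output_basic_test() above enforces, a color representation is only accepted together with a buffer, so both are set on the same state:

static bool commit_with_color_representation(struct wlr_output *output,
		struct wlr_buffer *buffer, enum wlr_color_encoding encoding,
		enum wlr_color_range range) {
	struct wlr_output_state pending;
	wlr_output_state_init(&pending);
	wlr_output_state_set_buffer(&pending, buffer);
	wlr_output_state_set_color_encoding_and_range(&pending, encoding, range);
	bool ok = wlr_output_commit_state(output, &pending);
	wlr_output_state_finish(&pending);
	return ok;
}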


@ -38,7 +38,7 @@ static struct wlr_output *get_surface_frame_pacing_output(struct wlr_surface *su
return frame_pacing_output;
}
static bool get_tf_preference(enum wlr_color_transfer_function tf) {
static int get_tf_preference(enum wlr_color_transfer_function tf) {
switch (tf) {
case WLR_COLOR_TRANSFER_FUNCTION_GAMMA22:
return 0;
@ -52,7 +52,7 @@ static bool get_tf_preference(enum wlr_color_transfer_function tf) {
abort(); // unreachable
}
static bool get_primaries_preference(enum wlr_color_named_primaries primaries) {
static int get_primaries_preference(enum wlr_color_named_primaries primaries) {
switch (primaries) {
case WLR_COLOR_NAMED_PRIMARIES_SRGB:
return 0;
@ -94,14 +94,44 @@ static void handle_scene_buffer_outputs_update(
struct wl_listener *listener, void *data) {
struct wlr_scene_surface *surface =
wl_container_of(listener, surface, outputs_update);
struct wlr_scene_outputs_update_event *event = data;
struct wlr_scene *scene = scene_node_get_root(&surface->buffer->node);
// If the surface is no longer visible on any output, keep the last sent
// preferred configuration to avoid unnecessary redraws
if (wl_list_empty(&surface->surface->current_outputs)) {
if (event->size == 0) {
return;
}
// To avoid sending redundant leave/enter events when a surface is hidden and then shown
// without moving to a different output, the following policy is implemented:
//
// 1. When a surface transitions from being visible on >0 outputs to being visible on 0 outputs,
// don't send any leave events.
//
// 2. When a surface transitions from being visible on 0 outputs to being visible on >0 outputs,
// send leave events for all entered outputs on which the surface is no longer visible as
// well as enter events for any outputs not already entered.
struct wlr_surface_output *entered_output, *tmp;
wl_list_for_each_safe(entered_output, tmp, &surface->surface->current_outputs, link) {
bool active = false;
for (size_t i = 0; i < event->size; i++) {
if (entered_output->output == event->active[i]->output) {
active = true;
break;
}
}
if (!active) {
wlr_surface_send_leave(surface->surface, entered_output->output);
}
}
for (size_t i = 0; i < event->size; i++) {
// This function internally checks if an enter event was already sent for the output
// to avoid sending redundant events.
wlr_surface_send_enter(surface->surface, event->active[i]->output);
}
double scale = get_surface_preferred_buffer_scale(surface->surface);
wlr_fractional_scale_v1_notify_scale(surface->surface, scale);
wlr_surface_set_preferred_buffer_scale(surface->surface, ceil(scale));
@ -114,24 +144,6 @@ static void handle_scene_buffer_outputs_update(
}
}
static void handle_scene_buffer_output_enter(
struct wl_listener *listener, void *data) {
struct wlr_scene_surface *surface =
wl_container_of(listener, surface, output_enter);
struct wlr_scene_output *output = data;
wlr_surface_send_enter(surface->surface, output->output);
}
static void handle_scene_buffer_output_leave(
struct wl_listener *listener, void *data) {
struct wlr_scene_surface *surface =
wl_container_of(listener, surface, output_leave);
struct wlr_scene_output *output = data;
wlr_surface_send_leave(surface->surface, output->output);
}
static void handle_scene_buffer_output_sample(
struct wl_listener *listener, void *data) {
struct wlr_scene_surface *surface =
@ -147,6 +159,13 @@ static void handle_scene_buffer_output_sample(
} else {
wlr_presentation_surface_textured_on_output(surface->surface, output);
}
struct wlr_linux_drm_syncobj_surface_v1_state *syncobj_surface_state =
wlr_linux_drm_syncobj_v1_get_surface_state(surface->surface);
if (syncobj_surface_state != NULL && event->release_timeline != NULL) {
wlr_linux_drm_syncobj_v1_state_add_release_point(syncobj_surface_state,
event->release_timeline, event->release_point, output->event_loop);
}
}
static void handle_scene_buffer_frame_done(
@ -311,27 +330,15 @@ static void surface_reconfigure(struct wlr_scene_surface *scene_surface) {
struct wlr_linux_drm_syncobj_surface_v1_state *syncobj_surface_state =
wlr_linux_drm_syncobj_v1_get_surface_state(surface);
struct wlr_drm_syncobj_timeline *wait_timeline = NULL;
uint64_t wait_point = 0;
if (syncobj_surface_state != NULL) {
wait_timeline = syncobj_surface_state->acquire_timeline;
wait_point = syncobj_surface_state->acquire_point;
}
struct wlr_scene_buffer_set_buffer_options options = {
.damage = &surface->buffer_damage,
.wait_timeline = wait_timeline,
.wait_point = wait_point,
};
if (syncobj_surface_state != NULL) {
options.wait_timeline = syncobj_surface_state->acquire_timeline;
options.wait_point = syncobj_surface_state->acquire_point;
}
wlr_scene_buffer_set_buffer_with_options(scene_buffer,
&surface->buffer->base, &options);
if (syncobj_surface_state != NULL &&
(surface->current.committed & WLR_SURFACE_STATE_BUFFER) &&
surface->buffer->source != NULL) {
wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer(syncobj_surface_state,
surface->buffer->source);
}
} else {
wlr_scene_buffer_set_buffer(scene_buffer, NULL);
}
@ -354,10 +361,9 @@ static void handle_scene_surface_surface_commit(
// the surface anyway.
int lx, ly;
bool enabled = wlr_scene_node_coords(&scene_buffer->node, &lx, &ly);
if (!wl_list_empty(&surface->surface->current.frame_callback_list) &&
surface->buffer->primary_output != NULL && enabled) {
wlr_output_schedule_frame(surface->buffer->primary_output->output);
struct wlr_output *output = get_surface_frame_pacing_output(surface->surface);
if (!wl_list_empty(&surface->surface->current.frame_callback_list) && output && enabled) {
wlr_output_schedule_frame(output);
}
}
@ -380,8 +386,6 @@ static void surface_addon_destroy(struct wlr_addon *addon) {
wlr_addon_finish(&surface->addon);
wl_list_remove(&surface->outputs_update.link);
wl_list_remove(&surface->output_enter.link);
wl_list_remove(&surface->output_leave.link);
wl_list_remove(&surface->output_sample.link);
wl_list_remove(&surface->frame_done.link);
wl_list_remove(&surface->surface_destroy.link);
@ -427,12 +431,6 @@ struct wlr_scene_surface *wlr_scene_surface_create(struct wlr_scene_tree *parent
surface->outputs_update.notify = handle_scene_buffer_outputs_update;
wl_signal_add(&scene_buffer->events.outputs_update, &surface->outputs_update);
surface->output_enter.notify = handle_scene_buffer_output_enter;
wl_signal_add(&scene_buffer->events.output_enter, &surface->output_enter);
surface->output_leave.notify = handle_scene_buffer_output_leave;
wl_signal_add(&scene_buffer->events.output_leave, &surface->output_leave);
surface->output_sample.notify = handle_scene_buffer_output_sample;
wl_signal_add(&scene_buffer->events.output_sample, &surface->output_sample);


@ -113,24 +113,11 @@ void wlr_scene_node_destroy(struct wlr_scene_node *node) {
if (node->type == WLR_SCENE_NODE_BUFFER) {
struct wlr_scene_buffer *scene_buffer = wlr_scene_buffer_from_node(node);
uint64_t active = scene_buffer->active_outputs;
if (active) {
struct wlr_scene_output *scene_output;
wl_list_for_each(scene_output, &scene->outputs, link) {
if (active & (1ull << scene_output->index)) {
wl_signal_emit_mutable(&scene_buffer->events.output_leave,
scene_output);
}
}
}
scene_buffer_set_buffer(scene_buffer, NULL);
scene_buffer_set_texture(scene_buffer, NULL);
pixman_region32_fini(&scene_buffer->opaque_region);
wlr_drm_syncobj_timeline_unref(scene_buffer->wait_timeline);
assert(wl_list_empty(&scene_buffer->events.output_leave.listener_list));
assert(wl_list_empty(&scene_buffer->events.output_enter.listener_list));
assert(wl_list_empty(&scene_buffer->events.outputs_update.listener_list));
assert(wl_list_empty(&scene_buffer->events.output_sample.listener_list));
assert(wl_list_empty(&scene_buffer->events.frame_done.listener_list));
@ -238,7 +225,7 @@ static bool _scene_nodes_in_box(struct wlr_scene_node *node, struct wlr_box *box
struct wlr_box node_box = { .x = lx, .y = ly };
scene_node_get_size(node, &node_box.width, &node_box.height);
if (wlr_box_intersection(&node_box, &node_box, box) &&
if (wlr_box_intersects(&node_box, box) &&
iterator(node, lx, ly, user_data)) {
return true;
}
@ -481,28 +468,12 @@ static void update_node_update_outputs(struct wlr_scene_node *node,
(struct wlr_linux_dmabuf_feedback_v1_init_options){0};
}
uint64_t old_active = scene_buffer->active_outputs;
scene_buffer->active_outputs = active_outputs;
struct wlr_scene_output *scene_output;
wl_list_for_each(scene_output, outputs, link) {
uint64_t mask = 1ull << scene_output->index;
bool intersects = active_outputs & mask;
bool intersects_before = old_active & mask;
if (intersects && !intersects_before) {
wl_signal_emit_mutable(&scene_buffer->events.output_enter, scene_output);
} else if (!intersects && intersects_before) {
wl_signal_emit_mutable(&scene_buffer->events.output_leave, scene_output);
}
}
// if there are active outputs on this node, we should always have a primary
// output
assert(!scene_buffer->active_outputs || scene_buffer->primary_output);
assert(!active_outputs || scene_buffer->primary_output);
// Skip output update event if nothing was updated
if (old_active == active_outputs &&
if (scene_buffer->active_outputs == active_outputs &&
(!force || ((1ull << force->index) & ~active_outputs)) &&
old_primary_output == scene_buffer->primary_output) {
return;
@ -515,6 +486,7 @@ static void update_node_update_outputs(struct wlr_scene_node *node,
};
size_t i = 0;
struct wlr_scene_output *scene_output;
wl_list_for_each(scene_output, outputs, link) {
if (~active_outputs & (1ull << scene_output->index)) {
continue;
@ -524,6 +496,7 @@ static void update_node_update_outputs(struct wlr_scene_node *node,
outputs_array[i++] = scene_output;
}
scene_buffer->active_outputs = active_outputs;
wl_signal_emit_mutable(&scene_buffer->events.outputs_update, &event);
}
@ -869,8 +842,6 @@ struct wlr_scene_buffer *wlr_scene_buffer_create(struct wlr_scene_tree *parent,
scene_node_init(&scene_buffer->node, WLR_SCENE_NODE_BUFFER, parent);
wl_signal_init(&scene_buffer->events.outputs_update);
wl_signal_init(&scene_buffer->events.output_enter);
wl_signal_init(&scene_buffer->events.output_leave);
wl_signal_init(&scene_buffer->events.output_sample);
wl_signal_init(&scene_buffer->events.frame_done);
@ -1553,6 +1524,8 @@ static void scene_entry_render(struct render_list_entry *entry, const struct ren
struct wlr_scene_output_sample_event sample_event = {
.output = data->output,
.direct_scanout = false,
.release_timeline = data->output->in_timeline,
.release_point = data->output->in_point,
};
wl_signal_emit_mutable(&scene_buffer->events.output_sample, &sample_event);
@ -1785,7 +1758,10 @@ struct wlr_scene_output *wlr_scene_output_create(struct wlr_scene *scene,
if (drm_fd >= 0 && output->backend->features.timeline &&
output->renderer != NULL && output->renderer->features.timeline) {
scene_output->in_timeline = wlr_drm_syncobj_timeline_create(drm_fd);
if (scene_output->in_timeline == NULL) {
scene_output->out_timeline = wlr_drm_syncobj_timeline_create(drm_fd);
if (scene_output->in_timeline == NULL || scene_output->out_timeline == NULL) {
wlr_drm_syncobj_timeline_unref(scene_output->in_timeline);
wlr_drm_syncobj_timeline_unref(scene_output->out_timeline);
return NULL;
}
}
@ -1840,7 +1816,14 @@ void wlr_scene_output_destroy(struct wlr_scene_output *scene_output) {
wl_list_remove(&scene_output->output_commit.link);
wl_list_remove(&scene_output->output_damage.link);
wl_list_remove(&scene_output->output_needs_frame.link);
wlr_drm_syncobj_timeline_unref(scene_output->in_timeline);
if (scene_output->in_timeline != NULL) {
wlr_drm_syncobj_timeline_signal(scene_output->in_timeline, UINT64_MAX);
wlr_drm_syncobj_timeline_unref(scene_output->in_timeline);
}
if (scene_output->out_timeline != NULL) {
wlr_drm_syncobj_timeline_signal(scene_output->out_timeline, UINT64_MAX);
wlr_drm_syncobj_timeline_unref(scene_output->out_timeline);
}
wlr_color_transform_unref(scene_output->gamma_lut_color_transform);
wlr_color_transform_unref(scene_output->prev_gamma_lut_color_transform);
wlr_color_transform_unref(scene_output->prev_supplied_color_transform);
@ -2086,15 +2069,6 @@ static enum scene_direct_scanout_result scene_entry_try_direct_scanout(
return SCANOUT_INELIGIBLE;
}
bool is_color_repr_none = buffer->color_encoding == WLR_COLOR_ENCODING_NONE &&
buffer->color_range == WLR_COLOR_RANGE_NONE;
bool is_color_repr_identity_full = buffer->color_encoding == WLR_COLOR_ENCODING_IDENTITY &&
buffer->color_range == WLR_COLOR_RANGE_FULL;
if (!(is_color_repr_none || is_color_repr_identity_full)) {
return SCANOUT_INELIGIBLE;
}
// We want to ensure optimal buffer selection, but as direct-scanout can be enabled and disabled
// on a frame-by-frame basis, we wait for a few frames to send the new format recommendations.
// Maybe we should only send feedback in this case if tests fail.
@ -2136,6 +2110,23 @@ static enum scene_direct_scanout_result scene_entry_try_direct_scanout(
if (buffer->wait_timeline != NULL) {
wlr_output_state_set_wait_timeline(&pending, buffer->wait_timeline, buffer->wait_point);
}
if (scene_output->out_timeline) {
scene_output->out_point++;
wlr_output_state_set_signal_timeline(&pending, scene_output->out_timeline, scene_output->out_point);
}
if (buffer->color_encoding == WLR_COLOR_ENCODING_IDENTITY &&
buffer->color_range == WLR_COLOR_RANGE_FULL) {
// IDENTITY+FULL (used for RGB formats) is equivalent to no color
// representation being set at all.
wlr_output_state_set_color_encoding_and_range(&pending,
WLR_COLOR_ENCODING_NONE, WLR_COLOR_RANGE_NONE);
} else {
wlr_output_state_set_color_encoding_and_range(&pending,
buffer->color_encoding, buffer->color_range);
}
if (!wlr_output_test_state(scene_output->output, &pending)) {
wlr_output_state_finish(&pending);
return SCANOUT_CANDIDATE;
@ -2147,6 +2138,8 @@ static enum scene_direct_scanout_result scene_entry_try_direct_scanout(
struct wlr_scene_output_sample_event sample_event = {
.output = scene_output,
.direct_scanout = true,
.release_timeline = data->output->out_timeline,
.release_point = data->output->out_point,
};
wl_signal_emit_mutable(&buffer->events.output_sample, &sample_event);
return SCANOUT_SUCCESS;
@ -2606,6 +2599,9 @@ bool wlr_scene_output_build_state(struct wlr_scene_output *scene_output,
if (scene_output->in_timeline != NULL) {
wlr_output_state_set_wait_timeline(state, scene_output->in_timeline,
scene_output->in_point);
scene_output->out_point++;
wlr_output_state_set_signal_timeline(state, scene_output->out_timeline,
scene_output->out_point);
}
if (!render_gamma_lut) {
@ -2673,8 +2669,7 @@ static void scene_output_for_each_scene_buffer(const struct wlr_box *output_box,
struct wlr_box node_box = { .x = lx, .y = ly };
scene_node_get_size(node, &node_box.width, &node_box.height);
struct wlr_box intersection;
if (wlr_box_intersection(&intersection, output_box, &node_box)) {
if (wlr_box_intersects(output_box, &node_box)) {
struct wlr_scene_buffer *scene_buffer =
wlr_scene_buffer_from_node(node);
user_iterator(scene_buffer, lx, ly, user_data);


@ -74,6 +74,41 @@ static float decode_cie1931_coord(int32_t raw) {
return (float)raw / (1000 * 1000);
}
static bool cie1931_xy_equal(const struct wlr_color_cie1931_xy *a,
const struct wlr_color_cie1931_xy *b) {
return a->x == b->x && a->y == b->y;
}
static bool primaries_equal(const struct wlr_color_primaries *a,
const struct wlr_color_primaries *b) {
return cie1931_xy_equal(&a->red, &b->red) &&
cie1931_xy_equal(&a->green, &b->green) &&
cie1931_xy_equal(&a->blue, &b->blue) &&
cie1931_xy_equal(&a->white, &b->white);
}
static bool img_desc_data_equal(const struct wlr_image_description_v1_data *a,
const struct wlr_image_description_v1_data *b) {
if (a->tf_named != b->tf_named ||
a->primaries_named != b->primaries_named ||
a->has_mastering_display_primaries != b->has_mastering_display_primaries ||
a->has_mastering_luminance != b->has_mastering_luminance ||
a->max_cll != b->max_cll ||
a->max_fall != b->max_fall) {
return false;
}
if (a->has_mastering_display_primaries &&
!primaries_equal(&a->mastering_display_primaries, &b->mastering_display_primaries)) {
return false;
}
if (a->has_mastering_luminance &&
(a->mastering_luminance.min != b->mastering_luminance.min ||
a->mastering_luminance.max != b->mastering_luminance.max)) {
return false;
}
return true;
}
static const struct wp_image_description_v1_interface image_desc_impl;
static struct wlr_image_description_v1 *image_desc_from_resource(struct wl_resource *resource) {
@ -1002,16 +1037,20 @@ void wlr_color_manager_v1_set_surface_preferred_image_description(
struct wlr_color_management_surface_feedback_v1 *surface_feedback;
wl_list_for_each(surface_feedback, &manager->surface_feedbacks, link) {
if (surface_feedback->surface == surface) {
surface_feedback->data = *data;
uint32_t version = wl_resource_get_version(surface_feedback->resource);
if (version >= WP_COLOR_MANAGEMENT_SURFACE_FEEDBACK_V1_PREFERRED_CHANGED2_SINCE_VERSION) {
wp_color_management_surface_feedback_v1_send_preferred_changed2(
surface_feedback->resource, identity_hi, identity_lo);
} else {
wp_color_management_surface_feedback_v1_send_preferred_changed(
surface_feedback->resource, identity_lo);
}
if (surface_feedback->surface != surface ||
img_desc_data_equal(&surface_feedback->data, data)) {
continue;
}
surface_feedback->data = *data;
uint32_t version = wl_resource_get_version(surface_feedback->resource);
if (version >= WP_COLOR_MANAGEMENT_SURFACE_FEEDBACK_V1_PREFERRED_CHANGED2_SINCE_VERSION) {
wp_color_management_surface_feedback_v1_send_preferred_changed2(
surface_feedback->resource, identity_hi, identity_lo);
} else {
wp_color_management_surface_feedback_v1_send_preferred_changed(
surface_feedback->resource, identity_lo);
}
}
}


@ -246,40 +246,40 @@ static void surface_finalize_pending(struct wlr_surface *surface) {
}
static void surface_update_damage(pixman_region32_t *buffer_damage,
struct wlr_surface_state *current, struct wlr_surface_state *pending) {
struct wlr_surface_state *state) {
pixman_region32_clear(buffer_damage);
// Copy over surface damage + buffer damage
pixman_region32_t surface_damage;
pixman_region32_init(&surface_damage);
pixman_region32_copy(&surface_damage, &pending->surface_damage);
pixman_region32_copy(&surface_damage, &state->surface_damage);
if (pending->viewport.has_dst) {
if (state->viewport.has_dst) {
int src_width, src_height;
surface_state_viewport_src_size(pending, &src_width, &src_height);
float scale_x = (float)pending->viewport.dst_width / src_width;
float scale_y = (float)pending->viewport.dst_height / src_height;
surface_state_viewport_src_size(state, &src_width, &src_height);
float scale_x = (float)state->viewport.dst_width / src_width;
float scale_y = (float)state->viewport.dst_height / src_height;
wlr_region_scale_xy(&surface_damage, &surface_damage,
1.0 / scale_x, 1.0 / scale_y);
}
if (pending->viewport.has_src) {
if (state->viewport.has_src) {
// This is lossy: do a best-effort conversion
pixman_region32_translate(&surface_damage,
floor(pending->viewport.src.x),
floor(pending->viewport.src.y));
floor(state->viewport.src.x),
floor(state->viewport.src.y));
}
wlr_region_scale(&surface_damage, &surface_damage, pending->scale);
wlr_region_scale(&surface_damage, &surface_damage, state->scale);
int width, height;
surface_state_transformed_buffer_size(pending, &width, &height);
surface_state_transformed_buffer_size(state, &width, &height);
wlr_region_transform(&surface_damage, &surface_damage,
wlr_output_transform_invert(pending->transform),
wlr_output_transform_invert(state->transform),
width, height);
pixman_region32_union(buffer_damage,
&pending->buffer_damage, &surface_damage);
&state->buffer_damage, &surface_damage);
pixman_region32_fini(&surface_damage);
}
@ -521,8 +521,6 @@ static void surface_commit_state(struct wlr_surface *surface,
surface->unmap_commit = false;
}
surface_update_damage(&surface->buffer_damage, &surface->current, next);
surface->previous.scale = surface->current.scale;
surface->previous.transform = surface->current.transform;
surface->previous.width = surface->current.width;
@ -531,6 +529,7 @@ static void surface_commit_state(struct wlr_surface *surface,
surface->previous.buffer_height = surface->current.buffer_height;
surface_state_move(&surface->current, next, surface);
surface_update_damage(&surface->buffer_damage, &surface->current);
if (invalid_buffer) {
surface_apply_damage(surface);

types/wlr_ext_background_effect_v1.c (new file, +241 lines)

@ -0,0 +1,241 @@
#include <assert.h>
#include <stdlib.h>
#include <wlr/types/wlr_ext_background_effect_v1.h>
#include <wlr/types/wlr_compositor.h>
#include <wlr/types/wlr_region.h>
#include <wlr/util/addon.h>
#include "ext-background-effect-v1-protocol.h"
#define BACKGROUND_EFFECT_VERSION 1
struct wlr_ext_background_effect_surface_v1 {
struct wl_resource *resource;
struct wlr_surface *surface;
struct wlr_addon addon;
struct wlr_surface_synced synced;
struct wlr_ext_background_effect_surface_v1_state pending, current;
};
static const struct ext_background_effect_surface_v1_interface surface_impl;
static const struct ext_background_effect_manager_v1_interface manager_impl;
static struct wlr_ext_background_effect_surface_v1 *surface_from_resource(
struct wl_resource *resource) {
assert(wl_resource_instance_of(resource,
&ext_background_effect_surface_v1_interface, &surface_impl));
return wl_resource_get_user_data(resource);
}
static void surface_destroy(struct wlr_ext_background_effect_surface_v1 *surface) {
if (surface == NULL) {
return;
}
wlr_surface_synced_finish(&surface->synced);
wlr_addon_finish(&surface->addon);
wl_resource_set_user_data(surface->resource, NULL);
free(surface);
}
static void surface_handle_resource_destroy(struct wl_resource *resource) {
struct wlr_ext_background_effect_surface_v1 *surface = surface_from_resource(resource);
surface_destroy(surface);
}
static void surface_handle_destroy(struct wl_client *client, struct wl_resource *resource) {
wl_resource_destroy(resource);
}
static void surface_handle_set_blur_region(struct wl_client *client, struct wl_resource *resource,
struct wl_resource *region_resource) {
struct wlr_ext_background_effect_surface_v1 *surface = surface_from_resource(resource);
if (surface == NULL) {
wl_resource_post_error(resource,
EXT_BACKGROUND_EFFECT_SURFACE_V1_ERROR_SURFACE_DESTROYED,
"The wl_surface object has been destroyed");
return;
}
if (region_resource != NULL) {
const pixman_region32_t *region = wlr_region_from_resource(region_resource);
pixman_region32_copy(&surface->pending.blur_region, region);
} else {
pixman_region32_clear(&surface->pending.blur_region);
}
}
static const struct ext_background_effect_surface_v1_interface surface_impl = {
.destroy = surface_handle_destroy,
.set_blur_region = surface_handle_set_blur_region,
};
static void surface_synced_init_state(void *_state) {
struct wlr_ext_background_effect_surface_v1_state *state = _state;
pixman_region32_init(&state->blur_region);
}
static void surface_synced_finish_state(void *_state) {
struct wlr_ext_background_effect_surface_v1_state *state = _state;
pixman_region32_fini(&state->blur_region);
}
static void surface_synced_move_state(void *_dst, void *_src) {
struct wlr_ext_background_effect_surface_v1_state *dst = _dst;
struct wlr_ext_background_effect_surface_v1_state *src = _src;
pixman_region32_copy(&dst->blur_region, &src->blur_region);
}
static const struct wlr_surface_synced_impl surface_synced_impl = {
.state_size = sizeof(struct wlr_ext_background_effect_surface_v1_state),
.init_state = surface_synced_init_state,
.finish_state = surface_synced_finish_state,
.move_state = surface_synced_move_state,
};
static void surface_addon_destroy(struct wlr_addon *addon) {
struct wlr_ext_background_effect_surface_v1 *surface = wl_container_of(addon, surface, addon);
surface_destroy(surface);
}
static const struct wlr_addon_interface surface_addon_impl = {
.name = "ext_background_effect_surface_v1",
.destroy = surface_addon_destroy,
};
static struct wlr_ext_background_effect_surface_v1 *surface_from_wlr_surface(
struct wlr_surface *wlr_surface) {
struct wlr_addon *addon = wlr_addon_find(&wlr_surface->addons, NULL, &surface_addon_impl);
if (addon == NULL) {
return NULL;
}
struct wlr_ext_background_effect_surface_v1 *surface =
wl_container_of(addon, surface, addon);
return surface;
}
static void manager_handle_destroy(struct wl_client *client, struct wl_resource *resource) {
wl_resource_destroy(resource);
}
static void manager_handle_get_background_effect(struct wl_client *client,
struct wl_resource *manager_resource, uint32_t id, struct wl_resource *surface_resource) {
struct wlr_surface *wlr_surface = wlr_surface_from_resource(surface_resource);
if (surface_from_wlr_surface(wlr_surface) != NULL) {
wl_resource_post_error(manager_resource,
EXT_BACKGROUND_EFFECT_MANAGER_V1_ERROR_BACKGROUND_EFFECT_EXISTS,
"The wl_surface object already has a ext_background_effect_surface_v1 object");
return;
}
struct wlr_ext_background_effect_surface_v1 *surface = calloc(1, sizeof(*surface));
if (surface == NULL) {
wl_resource_post_no_memory(manager_resource);
return;
}
if (!wlr_surface_synced_init(&surface->synced, wlr_surface, &surface_synced_impl,
&surface->pending, &surface->current)) {
free(surface);
wl_resource_post_no_memory(manager_resource);
return;
}
uint32_t version = wl_resource_get_version(manager_resource);
surface->resource =
wl_resource_create(client, &ext_background_effect_surface_v1_interface, version, id);
if (surface->resource == NULL) {
wlr_surface_synced_finish(&surface->synced);
free(surface);
wl_resource_post_no_memory(manager_resource);
return;
}
wl_resource_set_implementation(surface->resource, &surface_impl, surface,
surface_handle_resource_destroy);
surface->surface = wlr_surface;
wlr_addon_init(&surface->addon, &wlr_surface->addons, NULL, &surface_addon_impl);
}
static const struct ext_background_effect_manager_v1_interface manager_impl = {
.destroy = manager_handle_destroy,
.get_background_effect = manager_handle_get_background_effect,
};
static void manager_handle_resource_destroy(struct wl_resource *resource) {
wl_list_remove(wl_resource_get_link(resource));
}
static void manager_bind(struct wl_client *wl_client, void *data, uint32_t version, uint32_t id) {
struct wlr_ext_background_effect_manager_v1 *manager = data;
struct wl_resource *resource = wl_resource_create(wl_client,
&ext_background_effect_manager_v1_interface, version, id);
if (resource == NULL) {
wl_client_post_no_memory(wl_client);
return;
}
wl_resource_set_implementation(resource, &manager_impl, manager,
manager_handle_resource_destroy);
wl_list_insert(&manager->resources, wl_resource_get_link(resource));
ext_background_effect_manager_v1_send_capabilities(resource, manager->capabilities);
}
static void handle_display_destroy(struct wl_listener *listener, void *data) {
struct wlr_ext_background_effect_manager_v1 *manager =
wl_container_of(listener, manager, display_destroy);
wl_signal_emit_mutable(&manager->events.destroy, NULL);
assert(wl_list_empty(&manager->events.destroy.listener_list));
wl_global_destroy(manager->global);
wl_list_remove(&manager->display_destroy.link);
free(manager);
}
struct wlr_ext_background_effect_manager_v1 *wlr_ext_background_effect_manager_v1_create(
struct wl_display *display, uint32_t version, uint32_t capabilities) {
assert(version <= BACKGROUND_EFFECT_VERSION);
struct wlr_ext_background_effect_manager_v1 *manager = calloc(1, sizeof(*manager));
if (manager == NULL) {
return NULL;
}
manager->global = wl_global_create(display, &ext_background_effect_manager_v1_interface, version,
manager, manager_bind);
if (manager->global == NULL) {
free(manager);
return NULL;
}
manager->capabilities = capabilities;
wl_signal_init(&manager->events.destroy);
wl_list_init(&manager->resources);
manager->display_destroy.notify = handle_display_destroy;
wl_display_add_destroy_listener(display, &manager->display_destroy);
return manager;
}
const struct wlr_ext_background_effect_surface_v1_state *
wlr_ext_background_effect_v1_get_surface_state(struct wlr_surface *wlr_surface) {
struct wlr_ext_background_effect_surface_v1 *surface =
surface_from_wlr_surface(wlr_surface);
if (surface == NULL) {
return NULL;
}
return &surface->current;
}


@ -168,7 +168,7 @@ wlr_ext_foreign_toplevel_handle_v1_create(struct wlr_ext_foreign_toplevel_list_v
return NULL;
}
wl_list_insert(&list->toplevels, &toplevel->link);
wl_list_insert(list->toplevels.prev, &toplevel->link);
toplevel->list = list;
if (state->app_id) {
toplevel->app_id = strdup(state->app_id);
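The one-line change above switches from inserting new handles at the head of the list to inserting them at the tail, so clients see toplevels in creation order. For reference, wl_list_insert(list, elm) links elm immediately after `list`; a toy illustration with made-up types:

struct item { struct wl_list link; int id; };
struct wl_list handles;
wl_list_init(&handles);
struct item a = { .id = 1 }, b = { .id = 2 };
wl_list_insert(&handles, &a.link);     // inserts after the head: a is first
wl_list_insert(handles.prev, &b.link); // inserts after the current tail: b is appended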


@ -518,14 +518,16 @@ static void cursor_session_handle_get_capture_session(struct wl_client *client,
return;
}
cursor_session->capture_session_created = true;
struct wlr_ext_image_copy_capture_manager_v1 *manager = NULL;
struct wlr_ext_image_capture_source_v1 *source = NULL;
if (cursor_session != NULL) {
manager = cursor_session->manager;
cursor_session->capture_session_created = true;
source = &cursor_session->source->base;
}
session_create(cursor_session_resource, new_id, source, 0, cursor_session->manager);
session_create(cursor_session_resource, new_id, source, 0, manager);
}
static const struct ext_image_copy_capture_cursor_session_v1_interface cursor_session_impl = {


@ -790,7 +790,7 @@ struct wlr_ext_workspace_handle_v1 *wlr_ext_workspace_handle_v1_create(
wl_array_init(&workspace->coordinates);
wl_signal_init(&workspace->events.destroy);
wl_list_insert(&manager->workspaces, &workspace->link);
wl_list_insert(manager->workspaces.prev, &workspace->link);
struct wlr_ext_workspace_manager_v1_resource *manager_res;
wl_list_for_each(manager_res, &manager->resources, link) {


@ -530,7 +530,7 @@ wlr_foreign_toplevel_handle_v1_create(
return NULL;
}
wl_list_insert(&manager->toplevels, &toplevel->link);
wl_list_insert(manager->toplevels.prev, &toplevel->link);
toplevel->manager = manager;
wl_list_init(&toplevel->resources);


@ -84,6 +84,16 @@ void wlr_keyboard_notify_modifiers(struct wlr_keyboard *keyboard,
uint32_t mods_depressed, uint32_t mods_latched, uint32_t mods_locked,
uint32_t group) {
if (keyboard->xkb_state == NULL) {
if (keyboard->modifiers.depressed != mods_depressed ||
keyboard->modifiers.latched != mods_latched ||
keyboard->modifiers.locked != mods_locked ||
keyboard->modifiers.group != group) {
keyboard->modifiers.depressed = mods_depressed;
keyboard->modifiers.latched = mods_latched;
keyboard->modifiers.locked = mods_locked;
keyboard->modifiers.group = group;
wl_signal_emit_mutable(&keyboard->events.modifiers, keyboard);
}
return;
}
xkb_state_update_mask(keyboard->xkb_state, mods_depressed, mods_latched,


@ -394,6 +394,7 @@ static void layer_surface_role_commit(struct wlr_surface *wlr_surface) {
layer_surface_reset(surface);
assert(!surface->initialized);
assert(!surface->initial_commit);
surface->initial_commit = false;
} else {
surface->initial_commit = !surface->initialized;


@ -3,6 +3,7 @@
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <wayland-server-core.h>
#include <wlr/render/drm_syncobj.h>
#include <wlr/types/wlr_buffer.h>
#include <wlr/types/wlr_compositor.h>
@ -11,6 +12,8 @@
#include <xf86drm.h>
#include "config.h"
#include "linux-drm-syncobj-v1-protocol.h"
#include "render/dmabuf.h"
#include "render/drm_syncobj_merger.h"
#define LINUX_DRM_SYNCOBJ_V1_VERSION 1
@ -158,20 +161,39 @@ static void surface_synced_finish_state(void *_state) {
struct wlr_linux_drm_syncobj_surface_v1_state *state = _state;
wlr_drm_syncobj_timeline_unref(state->acquire_timeline);
wlr_drm_syncobj_timeline_unref(state->release_timeline);
wlr_drm_syncobj_merger_unref(state->release_merger);
}
static void surface_synced_move_state(void *_dst, void *_src) {
struct wlr_linux_drm_syncobj_surface_v1_state *dst = _dst, *src = _src;
// TODO: immediately signal dst.release_timeline if necessary
if (src->acquire_timeline == NULL) {
// ignore commits that did not attach a buffer
return;
}
surface_synced_finish_state(dst);
*dst = *src;
dst->committed = true;
*src = (struct wlr_linux_drm_syncobj_surface_v1_state){0};
}
static void surface_synced_commit(struct wlr_surface_synced *synced) {
struct wlr_linux_drm_syncobj_surface_v1 *surface = wl_container_of(synced, surface, synced);
if (!surface->current.committed) {
return;
}
surface->current.release_merger = wlr_drm_syncobj_merger_create(
surface->current.release_timeline, surface->current.release_point);
surface->current.committed = false;
}
static const struct wlr_surface_synced_impl surface_synced_impl = {
.state_size = sizeof(struct wlr_linux_drm_syncobj_surface_v1_state),
.finish_state = surface_synced_finish_state,
.move_state = surface_synced_move_state,
.commit = surface_synced_commit,
};
static void manager_handle_destroy(struct wl_client *client,
@ -422,6 +444,11 @@ struct wlr_linux_drm_syncobj_manager_v1 *wlr_linux_drm_syncobj_manager_v1_create
struct wl_display *display, uint32_t version, int drm_fd) {
assert(version <= LINUX_DRM_SYNCOBJ_V1_VERSION);
if (!HAVE_LINUX_SYNC_FILE) {
wlr_log(WLR_INFO, "Linux sync_file unavailable, disabling linux-drm-syncobj-v1");
return NULL;
}
if (!check_syncobj_eventfd(drm_fd)) {
wlr_log(WLR_INFO, "DRM syncobj eventfd unavailable, disabling linux-drm-syncobj-v1");
return NULL;
@@ -467,20 +494,14 @@ wlr_linux_drm_syncobj_v1_get_surface_state(struct wlr_surface *wlr_surface) {
}
struct release_signaller {
struct wlr_drm_syncobj_timeline *timeline;
uint64_t point;
struct wlr_drm_syncobj_merger *merger;
struct wl_listener buffer_release;
};
static void release_signaller_handle_buffer_release(struct wl_listener *listener, void *data) {
struct release_signaller *signaller = wl_container_of(listener, signaller, buffer_release);
if (drmSyncobjTimelineSignal(signaller->timeline->drm_fd, &signaller->timeline->handle,
&signaller->point, 1) != 0) {
wlr_log(WLR_ERROR, "drmSyncobjTimelineSignal() failed");
}
wlr_drm_syncobj_timeline_unref(signaller->timeline);
wlr_drm_syncobj_merger_unref(signaller->merger);
wl_list_remove(&signaller->buffer_release.link);
free(signaller);
}
@@ -488,7 +509,7 @@ static void release_signaller_handle_buffer_release(struct wl_listener *listener
bool wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer(
struct wlr_linux_drm_syncobj_surface_v1_state *state, struct wlr_buffer *buffer) {
assert(buffer->n_locks > 0);
if (state->release_timeline == NULL) {
if (state->release_merger == NULL) {
// This can happen if an existing surface with a buffer has a
// syncobj_surface_v1_state created but no new buffer with release
// timeline committed.
@@ -500,11 +521,32 @@ bool wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer(
return false;
}
signaller->timeline = wlr_drm_syncobj_timeline_ref(state->release_timeline);
signaller->point = state->release_point;
signaller->merger = wlr_drm_syncobj_merger_ref(state->release_merger);
signaller->buffer_release.notify = release_signaller_handle_buffer_release;
wl_signal_add(&buffer->events.release, &signaller->buffer_release);
return true;
}
bool wlr_linux_drm_syncobj_v1_state_add_release_point(
struct wlr_linux_drm_syncobj_surface_v1_state *state,
struct wlr_drm_syncobj_timeline *release_timeline, uint64_t release_point,
struct wl_event_loop *event_loop) {
if (state->release_merger == NULL) {
// This can happen if an existing surface with a buffer has a
// syncobj_surface_v1_state created but no new buffer with release
// timeline committed.
return true;
}
return wlr_drm_syncobj_merger_add(state->release_merger,
release_timeline, release_point, event_loop);
}
bool wlr_linux_drm_syncobj_v1_state_add_release_from_implicit_sync(
struct wlr_linux_drm_syncobj_surface_v1_state *state,
struct wlr_buffer *buffer, struct wl_event_loop *event_loop) {
if (state->release_merger == NULL) {
return true;
}
return wlr_drm_syncobj_merger_add_dmabuf(state->release_merger, buffer, event_loop);
}
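A hedged sketch of how a compositor might combine the helpers above when it latches a committed buffer. wlr_linux_drm_syncobj_v1_get_surface_state() and the two _state_add_release_* functions come from this diff; the render timeline/point arguments, the lookup's return type, and the overall call order are assumptions made for illustration:

#include <stdbool.h>
#include <stdint.h>
#include <wayland-server-core.h>
#include <wlr/types/wlr_linux_drm_syncobj_v1.h>

static bool queue_release(struct wlr_surface *surface, struct wlr_buffer *buffer,
		struct wlr_drm_syncobj_timeline *render_timeline, uint64_t render_point,
		struct wl_event_loop *event_loop) {
	struct wlr_linux_drm_syncobj_surface_v1_state *state =
		wlr_linux_drm_syncobj_v1_get_surface_state(surface);
	if (state == NULL) {
		return true; // surface does not use explicit sync
	}
	// Gate the client's release on the compositor's own rendering work...
	if (!wlr_linux_drm_syncobj_v1_state_add_release_point(state,
			render_timeline, render_point, event_loop)) {
		return false;
	}
	// ...and fold the buffer's implicit dmabuf fences into the same release
	// merger when only implicit synchronization is available for that work.
	return wlr_linux_drm_syncobj_v1_state_add_release_from_implicit_sync(state,
			buffer, event_loop);
}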

View file

@@ -260,14 +260,12 @@ bool wlr_output_layout_contains_point(struct wlr_output_layout *layout,
bool wlr_output_layout_intersects(struct wlr_output_layout *layout,
struct wlr_output *reference, const struct wlr_box *target_lbox) {
struct wlr_box out_box;
if (reference == NULL) {
struct wlr_output_layout_output *l_output;
wl_list_for_each(l_output, &layout->outputs, link) {
struct wlr_box output_box;
output_layout_output_get_box(l_output, &output_box);
if (wlr_box_intersection(&out_box, &output_box, target_lbox)) {
if (wlr_box_intersects(&output_box, target_lbox)) {
return true;
}
}
@@ -281,7 +279,7 @@ bool wlr_output_layout_intersects(struct wlr_output_layout *layout,
struct wlr_box output_box;
output_layout_output_get_box(l_output, &output_box);
return wlr_box_intersection(&out_box, &output_box, target_lbox);
return wlr_box_intersects(&output_box, target_lbox);
}
}

View file

@@ -15,7 +15,7 @@ static const struct wlr_keyboard_impl keyboard_impl = {
static const struct zwp_virtual_keyboard_v1_interface virtual_keyboard_impl;
static struct wlr_virtual_keyboard_v1 *virtual_keyboard_from_resource(
struct wlr_virtual_keyboard_v1 *wlr_virtual_keyboard_v1_from_resource(
struct wl_resource *resource) {
assert(wl_resource_instance_of(resource,
&zwp_virtual_keyboard_v1_interface, &virtual_keyboard_impl));
@@ -39,7 +39,7 @@ static void virtual_keyboard_keymap(struct wl_client *client,
struct wl_resource *resource, uint32_t format, int32_t fd,
uint32_t size) {
struct wlr_virtual_keyboard_v1 *keyboard =
virtual_keyboard_from_resource(resource);
wlr_virtual_keyboard_v1_from_resource(resource);
if (keyboard == NULL) {
return;
}
@@ -52,7 +52,7 @@ static void virtual_keyboard_keymap(struct wl_client *client,
if (data == MAP_FAILED) {
goto fd_fail;
}
struct xkb_keymap *keymap = xkb_keymap_new_from_string(context, data,
struct xkb_keymap *keymap = xkb_keymap_new_from_buffer(context, data, size,
XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);
munmap(data, size);
if (!keymap) {
@@ -76,7 +76,7 @@ static void virtual_keyboard_key(struct wl_client *client,
struct wl_resource *resource, uint32_t time, uint32_t key,
uint32_t state) {
struct wlr_virtual_keyboard_v1 *keyboard =
virtual_keyboard_from_resource(resource);
wlr_virtual_keyboard_v1_from_resource(resource);
if (keyboard == NULL) {
return;
}
@@ -99,7 +99,7 @@ static void virtual_keyboard_modifiers(struct wl_client *client,
struct wl_resource *resource, uint32_t mods_depressed,
uint32_t mods_latched, uint32_t mods_locked, uint32_t group) {
struct wlr_virtual_keyboard_v1 *keyboard =
virtual_keyboard_from_resource(resource);
wlr_virtual_keyboard_v1_from_resource(resource);
if (keyboard == NULL) {
return;
}
@@ -113,21 +113,24 @@ static void virtual_keyboard_modifiers(struct wl_client *client,
mods_depressed, mods_latched, mods_locked, group);
}
static void virtual_keyboard_destroy_resource(struct wl_resource *resource) {
struct wlr_virtual_keyboard_v1 *keyboard =
virtual_keyboard_from_resource(resource);
if (keyboard == NULL) {
return;
}
wlr_keyboard_finish(&keyboard->keyboard);
wl_resource_set_user_data(keyboard->resource, NULL);
wl_list_remove(&keyboard->link);
free(keyboard);
static void virtual_keyboard_destroy(struct wlr_virtual_keyboard_v1 *virtual_keyboard) {
wlr_keyboard_finish(&virtual_keyboard->keyboard);
wl_resource_set_user_data(virtual_keyboard->resource, NULL);
wl_list_remove(&virtual_keyboard->seat_destroy.link);
wl_list_remove(&virtual_keyboard->link);
free(virtual_keyboard);
}
static void virtual_keyboard_destroy(struct wl_client *client,
static void virtual_keyboard_destroy_resource(struct wl_resource *resource) {
struct wlr_virtual_keyboard_v1 *virtual_keyboard =
wlr_virtual_keyboard_v1_from_resource(resource);
if (virtual_keyboard == NULL) {
return;
}
virtual_keyboard_destroy(virtual_keyboard);
}
static void virtual_keyboard_handle_destroy(struct wl_client *client,
struct wl_resource *resource) {
wl_resource_destroy(resource);
}
@@ -136,7 +139,7 @@ static const struct zwp_virtual_keyboard_v1_interface virtual_keyboard_impl = {
.keymap = virtual_keyboard_keymap,
.key = virtual_keyboard_key,
.modifiers = virtual_keyboard_modifiers,
.destroy = virtual_keyboard_destroy,
.destroy = virtual_keyboard_handle_destroy,
};
static const struct zwp_virtual_keyboard_manager_v1_interface manager_impl;
@@ -148,6 +151,13 @@ static struct wlr_virtual_keyboard_manager_v1 *manager_from_resource(
return wl_resource_get_user_data(resource);
}
static void virtual_keyboard_handle_seat_destroy(struct wl_listener *listener,
void *data) {
struct wlr_virtual_keyboard_v1 *virtual_keyboard = wl_container_of(listener, virtual_keyboard,
seat_destroy);
virtual_keyboard_destroy(virtual_keyboard);
}
static void virtual_keyboard_manager_create_virtual_keyboard(
struct wl_client *client, struct wl_resource *resource,
struct wl_resource *seat, uint32_t id) {
@@ -181,6 +191,9 @@ static void virtual_keyboard_manager_create_virtual_keyboard(
virtual_keyboard->seat = seat_client->seat;
wl_resource_set_user_data(keyboard_resource, virtual_keyboard);
wl_signal_add(&seat_client->events.destroy, &virtual_keyboard->seat_destroy);
virtual_keyboard->seat_destroy.notify = virtual_keyboard_handle_seat_destroy;
wl_list_insert(&manager->virtual_keyboards, &virtual_keyboard->link);
wl_signal_emit_mutable(&manager->events.new_virtual_keyboard,

View file

@@ -320,6 +320,7 @@ static void xdg_surface_role_commit(struct wlr_surface *wlr_surface) {
reset_xdg_surface_role_object(surface);
reset_xdg_surface(surface);
assert(!surface->initialized);
assert(!surface->initial_commit);
surface->initial_commit = false;
} else {

View file

@@ -4,6 +4,14 @@
#include <wlr/util/box.h>
#include <wlr/util/log.h>
static int max(int a, int b) {
return a > b ? a : b;
}
static int min(int a, int b) {
return a < b ? a : b;
}
void wlr_box_closest_point(const struct wlr_box *box, double x, double y,
double *dest_x, double *dest_y) {
// if box is empty, then it contains no points, so no closest point either
@@ -56,10 +64,10 @@ bool wlr_box_intersection(struct wlr_box *dest, const struct wlr_box *box_a,
return false;
}
int x1 = fmax(box_a->x, box_b->x);
int y1 = fmax(box_a->y, box_b->y);
int x2 = fmin(box_a->x + box_a->width, box_b->x + box_b->width);
int y2 = fmin(box_a->y + box_a->height, box_b->y + box_b->height);
int x1 = max(box_a->x, box_b->x);
int y1 = max(box_a->y, box_b->y);
int x2 = min(box_a->x + box_a->width, box_b->x + box_b->width);
int y2 = min(box_a->y + box_a->height, box_b->y + box_b->height);
dest->x = x1;
dest->y = y1;
@@ -94,6 +102,15 @@ bool wlr_box_contains_box(const struct wlr_box *bigger, const struct wlr_box *sm
smaller->y + smaller->height <= bigger->y + bigger->height;
}
bool wlr_box_intersects(const struct wlr_box *a, const struct wlr_box *b) {
if (wlr_box_empty(a) || wlr_box_empty(b)) {
return false;
}
return a->x < b->x + b->width && b->x < a->x + a->width &&
a->y < b->y + b->height && b->y < a->y + a->height;
}
void wlr_box_transform(struct wlr_box *dest, const struct wlr_box *box,
enum wl_output_transform transform, int width, int height) {
struct wlr_box src = {0};
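The new wlr_box_intersects() gives callers such as wlr_output_layout_intersects() a boolean overlap test without computing the intersection rectangle. A small usage sketch; the helper name and values are illustrative:

#include <stdbool.h>
#include <wlr/util/box.h>

static bool damage_touches_output(const struct wlr_box *damage,
		const struct wlr_box *output_box) {
	// Replaces the old pattern of calling wlr_box_intersection() with a
	// throwaway destination box; empty boxes never intersect.
	return wlr_box_intersects(damage, output_box);
}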

View file

@@ -1,15 +1,15 @@
#include <limits.h>
#include "util/rect_union.h"
static void box_union(pixman_box32_t *dst, pixman_box32_t box) {
dst->x1 = dst->x1 < box.x1 ? dst->x1 : box.x1;
dst->y1 = dst->y1 < box.y1 ? dst->y1 : box.y1;
dst->x2 = dst->x2 > box.x2 ? dst->x2 : box.x2;
dst->y2 = dst->y2 > box.y2 ? dst->y2 : box.y2;
static void box_union(pixman_box32_t *dst, const pixman_box32_t *box) {
dst->x1 = dst->x1 < box->x1 ? dst->x1 : box->x1;
dst->y1 = dst->y1 < box->y1 ? dst->y1 : box->y1;
dst->x2 = dst->x2 > box->x2 ? dst->x2 : box->x2;
dst->y2 = dst->y2 > box->y2 ? dst->y2 : box->y2;
}
static bool box_empty_or_invalid(pixman_box32_t box) {
return box.x1 >= box.x2 || box.y1 >= box.y2;
static bool box_empty_or_invalid(const pixman_box32_t *box) {
return box->x1 >= box->x2 || box->y1 >= box->y2;
}
void rect_union_init(struct rect_union *ru) {
@@ -37,20 +37,28 @@ static void handle_alloc_failure(struct rect_union *ru) {
wl_array_init(&ru->unsorted);
}
void rect_union_add(struct rect_union *ru, pixman_box32_t box) {
void rect_union_add(struct rect_union *ru, const pixman_box32_t *box) {
if (box_empty_or_invalid(box)) {
return;
}
box_union(&ru->bounding_box, box);
if (!ru->alloc_failure) {
pixman_box32_t *entry = wl_array_add(&ru->unsorted, sizeof(*entry));
if (entry) {
*entry = box;
} else {
handle_alloc_failure(ru);
}
if (ru->alloc_failure) {
return;
}
int nrects = (int)(ru->unsorted.size / sizeof(pixman_box32_t));
if (nrects >= 1024) {
handle_alloc_failure(ru);
return;
}
pixman_box32_t *entry = wl_array_add(&ru->unsorted, sizeof(*entry));
if (entry) {
*entry = *box;
} else {
handle_alloc_failure(ru);
}
}
@@ -81,7 +89,7 @@ const pixman_region32_t *rect_union_evaluate(struct rect_union *ru) {
return &ru->region;
bounding_box:
pixman_region32_fini(&ru->region);
if (box_empty_or_invalid(ru->bounding_box)) {
if (box_empty_or_invalid(&ru->bounding_box)) {
pixman_region32_init(&ru->region);
} else {
pixman_region32_init_with_extents(&ru->region, &ru->bounding_box);
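A hedged sketch of the accumulate-then-evaluate flow after these changes: rect_union_add() now takes the box by pointer, and once 1024 rects have already been stored, further additions fall back to bounding-box tracking, which rect_union_evaluate() then reports as a single-rect region. rect_union_finish() is assumed here as the cleanup counterpart to rect_union_init():

#include <pixman.h>
#include "util/rect_union.h"

static void collect_damage(void) {
	struct rect_union ru;
	rect_union_init(&ru);

	pixman_box32_t box = { .x1 = 0, .y1 = 0, .x2 = 64, .y2 = 64 };
	rect_union_add(&ru, &box); // box is now passed by pointer

	const pixman_region32_t *region = rect_union_evaluate(&ru);
	(void)region; // e.g. hand off to the renderer as a clip region

	rect_union_finish(&ru); // assumed cleanup helper
}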

View file

@@ -249,7 +249,7 @@ struct x11_data_source {
static const struct wlr_data_source_impl data_source_impl;
bool data_source_is_xwayland(
struct wlr_data_source *wlr_source) {
const struct wlr_data_source *wlr_source) {
return wlr_source->impl == &data_source_impl;
}
@@ -292,7 +292,7 @@ static const struct wlr_primary_selection_source_impl
primary_selection_source_impl;
bool primary_selection_source_is_xwayland(
struct wlr_primary_selection_source *wlr_source) {
const struct wlr_primary_selection_source *wlr_source) {
return wlr_source->impl == &primary_selection_source_impl;
}

View file

@@ -1157,12 +1157,10 @@ bool wlr_xwayland_surface_fetch_icon(
return false;
}
if (!xcb_ewmh_get_wm_icon_from_reply(icon_reply, reply)) {
free(reply);
return false;
}
bool ok = xcb_ewmh_get_wm_icon_from_reply(icon_reply, reply);
free(reply);
return true;
return ok;
}
static xcb_get_property_cookie_t get_property(struct wlr_xwm *xwm,