Compare commits

..

24 commits

Author SHA1 Message Date
Simon Ser
c1d38536c9 build: bump version to 0.20.0 2026-03-26 17:38:50 +01:00
Simon Zeni
a945fd8940 render/vulkan: compile against vulkan 1.2 header
Use the EXT version of VK_PIPELINE_COMPILE_REQUIRED in the `vulkan_strerror`
function, since the core value requires Vulkan 1.3, and switch to
VK_EXT_global_priority instead of VK_KHR_global_priority, which is likewise
only promoted to core in Vulkan 1.3.

(cherry picked from commit 413664e0b0)
2026-03-26 17:38:25 +01:00
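
For illustration, a minimal sketch (not the wlroots source) of how an error-string helper like `vulkan_strerror` can use the EXT alias so it still builds against Vulkan 1.2 headers; the case list here is abbreviated:

```c
#include <vulkan/vulkan.h>

// Sketch: VK_PIPELINE_COMPILE_REQUIRED is a Vulkan 1.3 core name, but the
// EXT alias from VK_EXT_pipeline_creation_cache_control has the same value
// and is available with 1.2 headers.
static const char *vulkan_strerror(VkResult err) {
	switch (err) {
	case VK_SUCCESS:
		return "VK_SUCCESS";
	case VK_PIPELINE_COMPILE_REQUIRED_EXT:
		return "VK_PIPELINE_COMPILE_REQUIRED";
	default:
		return "<unknown VkResult>";
	}
}
```
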
Félix Poisot
2bfbec4af1 linux_drm_syncobj_v1: fix handling of empty first commit
As reported in
https://gitlab.freedesktop.org/wlroots/wlroots/-/merge_requests/4979#note_3385626,
bfd6e619fc did not correctly handle clients
that don't immediately follow their call to
`wp_linux_drm_syncobj_manager_v1.get_surface` with a commit attaching
a buffer.

Fixes: bfd6e619fc
(cherry picked from commit fd870f6d27)
2026-03-26 17:38:25 +01:00
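
A rough sketch of the guard, with hypothetical struct and field names (`syncobj_surface`, `pending.buffer`); the point is simply that the first commit may carry no buffer:

```c
// Hypothetical sketch: a client may call get_surface and then commit
// without attaching a buffer. The first commit therefore has to be a
// no-op for release-point tracking rather than assumed to carry a buffer.
static void surface_handle_commit(struct syncobj_surface *surface) {
	if (surface->pending.buffer == NULL) {
		return; // empty first commit: nothing to track yet
	}
	/* ... record acquire/release points for the attached buffer ... */
}
```
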
Simon Ser
1a9b1292e2 color_management_v1: ignore surface update if no-op
If the new image description is identical to the old one, skip the
event.

(cherry picked from commit 4ca40004fd)
2026-03-26 17:38:25 +01:00
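
The shape of the check, with a hypothetical comparison helper:

```c
// Sketch: bail out before emitting the surface event when the new image
// description matches the current one.
if (image_description_equal(surface->image_description, new_desc)) {
	return;
}
```
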
Simon Ser
ad82740518 color_management_v1: use early continue in surface loop
(cherry picked from commit 7287f700ab)
2026-03-26 17:38:25 +01:00
Simon Ser
bba38a0d82 build: bump version to 0.20.0-rc5 2026-03-19 20:02:25 +01:00
Félix Poisot
df1539d9f0 output/drm: don't use OUT_FENCE_PTR
The returned fence is not required to be signalled at the earliest
possible time. It is not intended to replace the DRM flip event and is
expected to be signalled only much later.

(cherry picked from commit 8d454e1e34)
2026-03-19 20:01:40 +01:00
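
For reference, how an atomic commit requests that fence via the standard libdrm call (property-ID lookup omitted; `out_fence_ptr_prop_id` is assumed to hold the CRTC's OUT_FENCE_PTR property ID):

```c
// Sketch: OUT_FENCE_PTR makes the kernel write a sync_file fd for the
// commit into *out_fence_fd. Per the commit message above, that fence is
// only guaranteed to signal eventually, so it cannot substitute for the
// DRM page-flip event.
int out_fence_fd = -1;
drmModeAtomicAddProperty(req, crtc_id, out_fence_ptr_prop_id,
	(uint64_t)(uintptr_t)&out_fence_fd);
// drmModeAtomicCommit(drm_fd, req, flags, NULL) then fills out_fence_fd.
```
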
Félix Poisot
eff5aa52e6 backend/drm: properly delay syncobj signalling
The DRM CRTC signals when scanout begins, but
wlr_output_state_set_signal_timeline() is defined to signal on buffer
release. Delay the signal until the next page flip.

(cherry picked from commit cd555f9261)
2026-03-19 20:01:40 +01:00
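
A sketch of the delayed-signal idea, using the wlr_drm_syncobj_timeline_signal() helper added further down in this series; the connector struct and field names are hypothetical:

```c
// Hypothetical sketch: stash the timeline/point passed via
// wlr_output_state_set_signal_timeline() and signal it only on the *next*
// page flip, when the buffer has actually left scanout.
static void handle_page_flip(struct drm_conn_state *conn) {
	if (conn->pending_signal_timeline != NULL) {
		wlr_drm_syncobj_timeline_signal(conn->pending_signal_timeline,
			conn->pending_signal_point);
		conn->pending_signal_timeline = NULL;
	}
	/* ... latch the timeline/point of the commit now being scanned out ... */
}
```
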
Félix Poisot
8daba3246d scene: transfer sample syncobj to client timeline
(cherry picked from commit b2f6a390a4)
2026-03-19 20:01:40 +01:00
Félix Poisot
52564ea97c linux_drm_syncobj_v1: add release point accumulation
This changes the behavior of wlr_linux_drm_syncobj_surface_v1 to
automatically signal release of previous commits as they are replaced.

Users must call wlr_linux_drm_syncobj_v1_state_add_release_point or
wlr_linux_drm_syncobj_v1_state_signal_release_with_buffer to delay the
signal as appropriate.

(cherry picked from commit bfd6e619fc)
2026-03-19 20:01:40 +01:00
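
Sketch of the compositor-side call; the tail of the signature is visible in the header diff below, and the leading state argument is assumed:

```c
// Sketch: registering a release point delays the client's release signal
// until this point is reached and a replacement buffer has been
// committed, per the behavior described above.
if (!wlr_linux_drm_syncobj_v1_state_add_release_point(state,
		release_timeline, release_point, event_loop)) {
	wlr_log(WLR_ERROR, "Failed to add release point");
}
```
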
Félix Poisot
288ba9e75b drm/syncobj: add timeline point merger utility
(cherry picked from commit e83a679e23)
2026-03-19 20:01:40 +01:00
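
Conceptually, the merger folds many input fences into one, mirroring the sync_file_merge() loop visible in the diff below; this fragment is illustrative (`input_fds` and `n_inputs` are hypothetical):

```c
// Illustrative sketch: merge every input sync_file into a single fd so
// one wait covers all inputs; the merged fence signals when all do.
int merged = -1;
for (size_t i = 0; i < n_inputs; i++) {
	if (merged == -1) {
		merged = input_fds[i];
		continue;
	}
	int next = sync_file_merge(merged, input_fds[i]);
	close(merged);
	close(input_fds[i]);
	merged = next;
}
```
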
Félix Poisot
2f14845ce0 scene: add buffer release point to 'sample' event
(cherry picked from commit 1f3d351abb)
2026-03-19 20:01:23 +01:00
Félix Poisot
c4ff394f7f render/drm_syncobj: add wlr_drm_syncobj_timeline_signal()
(cherry picked from commit 0af9b9d003)
2026-03-19 19:48:36 +01:00
Isaac Freund
88718e84c9 virtual-keyboard: handle seat destroy
We must make the virtual keyboard inert when the seat is destroyed.

(cherry picked from commit 1fa8bb8f7a)
2026-03-19 19:48:14 +01:00
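
A sketch of the fix shape, with a hypothetical `seat_destroy` listener field:

```c
// Hypothetical sketch: when the seat goes away, unhook the listener and
// make the keyboard object inert so later client requests are ignored
// instead of touching a destroyed seat.
static void handle_seat_destroy(struct wl_listener *listener, void *data) {
	struct wlr_virtual_keyboard_v1 *keyboard =
		wl_container_of(listener, keyboard, seat_destroy);
	wl_list_remove(&keyboard->seat_destroy.link);
	/* ... null out the seat pointer and switch to no-op handlers ... */
}
```
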
Isaac Freund
6a902237ef virtual-keyboard: add wlr_virtual_keyboard_v1_from_resource()
I want to use the zwp_virtual_keyboard_v1 object in a custom river
protocol and need to be able to obtain the corresponding wlroots struct.

(cherry picked from commit ec746d3e3e)
2026-03-19 19:48:14 +01:00
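
Hypothetical usage from a custom protocol handler:

```c
// Sketch: resolve the wlroots object behind a zwp_virtual_keyboard_v1
// resource received through a custom (e.g. river-specific) protocol.
struct wlr_virtual_keyboard_v1 *keyboard =
	wlr_virtual_keyboard_v1_from_resource(keyboard_resource);
/* ... route the custom request to this keyboard ... */
```
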
Scott Moreau
47a43e14ae wlr_ext_image_copy_capture_v1: Fix crash when client creates a cursor session not implemented server side
This guards against a crash where the server implements
wlr_ext_image_capture_source_v1_interface without setting .get_pointer_cursor().
In general, we should install a NULL check here because this is a crash
waiting to happen. Now, instead of crashing, the resource will be created and
the copy capture session will be stopped.

(cherry picked from commit 1fc928d528)
2026-03-19 19:48:09 +01:00
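
Sketch of the guard described above (the stop helper is hypothetical):

```c
// Sketch: if the capture source implementation left .get_pointer_cursor
// unset, still create the session resource for the client, then stop the
// session instead of calling through a NULL function pointer.
if (source->impl->get_pointer_cursor == NULL) {
	cursor_session_stop(session); // hypothetical helper
	return;
}
```
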
llyyr
ba32abbddb scene: use wl_list_for_each_safe to iterate outputs
The outputs loop in handle_scene_buffer_outputs_update may remove entries
from the list while iterating, so use wl_list_for_each_safe instead of
wl_list_for_each.

Fixes: 39e918edc8 ("scene: avoid redundant wl_surface.enter/leave events")
(cherry picked from commit 3cb2cf9425)
2026-03-19 19:48:05 +01:00
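
The general pattern, with illustrative struct and helper names:

```c
// Sketch: wl_list_for_each_safe() keeps a lookahead pointer (tmp), so the
// current element can be unlinked and freed without breaking the
// iteration -- required because this loop removes outputs as it goes.
struct scene_buffer_output *bo, *tmp;
wl_list_for_each_safe(bo, tmp, &scene_buffer->outputs, link) {
	if (!output_still_visible(bo)) { // hypothetical predicate
		wl_list_remove(&bo->link);
		free(bo);
	}
}
```
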
Isaac Freund
9c3bfbeb48 scene: avoid redundant wl_surface.enter/leave events
Currently we send wl_surface.enter/leave when a surface is hidden
and shown again on the same output. In practice, this happens very
often since compositors like river and sway enable and disable
the scene nodes of surfaces as part of their atomic transaction
strategy involving rendering saved buffers while waiting for
clients to submit new buffers of the desired size.

The new strategy documented in the new comments avoids sending
redundant events in this case.

(cherry picked from commit 39e918edc8)
2026-03-19 19:47:55 +01:00
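
An illustrative sketch of the strategy (the set helpers are hypothetical): diff the previously-announced output set against the new one and only emit events for real membership changes:

```c
// Sketch: hide-then-show on the same output yields identical old/new
// membership, so no redundant enter/leave pair is sent.
bool was_entered = output_set_contains(&buffer->sent_outputs, output);
bool is_entered = output_set_contains(&buffer->new_outputs, output);
if (is_entered && !was_entered) {
	wlr_surface_send_enter(surface, output);
} else if (!is_entered && was_entered) {
	wlr_surface_send_leave(surface, output);
}
```
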
Diego Viola
b727521449 wlr-export-dmabuf-unstable-v1: fix typo
(cherry picked from commit 736c0f3f25)
2026-03-19 19:47:55 +01:00
Jonathan Marler
af228b879a backend/x11: ignore DestroyNotify events
The X11 backend subscribes to StructureNotify events, so when
output_destroy() calls xcb_destroy_window() the server sends a
DestroyNotify back. This is expected and harmless but was logged
as an unhandled event. Silence it the same way MAP_NOTIFY and
UNMAP_NOTIFY are already silenced.

(cherry picked from commit 3c8d199ec1)
2026-03-19 19:47:55 +01:00
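
The shape of the fix in the event dispatcher:

```c
// Sketch: mask off the send-event bit and ignore DestroyNotify the same
// way MapNotify/UnmapNotify are ignored, since it is expected traffic
// after xcb_destroy_window().
switch (event->response_type & ~0x80) {
case XCB_MAP_NOTIFY:
case XCB_UNMAP_NOTIFY:
case XCB_DESTROY_NOTIFY:
	break; // expected StructureNotify events, nothing to do
default:
	wlr_log(WLR_DEBUG, "Unhandled X11 event: %d",
		event->response_type & ~0x80);
	break;
}
```
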
Kenny Levinsen
ec26eb250a util/box: Use integer min/max for intersection
wlr_box_intersection only operates on integers, so we shouldn't use
fmin/fmax. Do the usual and add a local integer min/max helper.

(cherry picked from commit 285cee5f3a)
2026-03-19 19:47:55 +01:00
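
The gist of the change:

```c
// Sketch: fmin()/fmax() convert through double, which is unnecessary
// (and imprecise for large values) when both operands are ints; local
// helpers keep the intersection math purely integral.
static int min(int a, int b) { return a < b ? a : b; }
static int max(int a, int b) { return a > b ? a : b; }

// e.g. for the intersection edges:
int x1 = max(a->x, b->x);
int x2 = min(a->x + a->width, b->x + b->width);
```
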
Christopher Snowhill
3105ac2b16 scene: fix color format compare
bool doesn't really support negative values.

Fixes: 7cb3393e7 (scene: send color_management_v1 surface feedback)
(cherry picked from commit 9a931d9ffa)
2026-03-19 19:47:55 +01:00
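
What the one-liner is guarding against:

```c
// Sketch of the bug class: three-way comparisons return <0, 0, or >0.
// Storing the result in a bool collapses every non-zero value to true
// and discards the sign, so equality must be tested explicitly.
int cmp = memcmp(&fmt_a, &fmt_b, sizeof(fmt_a));
bool equal = (cmp == 0); // not: bool equal = cmp;
```
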
Simon Zeni
557fde4d01 ci: update dalligi upstream repo
(cherry picked from commit 67ce318b1f)
2026-03-19 19:47:55 +01:00
Andri Yngvason
e444836bf2 image_capture_source/output: Update constraints on enable
Without observing the enable event, clients receive no pixel formats and
buffer dimensions are reported as 0 after an output has been re-enabled.

(cherry picked from commit 3336d28813)
2026-03-19 19:47:55 +01:00
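
Sketch of the fix shape (listener and helper names hypothetical):

```c
// Sketch: re-announce buffer constraints whenever the output is
// (re-)enabled, so clients see real pixel formats and non-zero
// dimensions instead of the stale post-disable state.
static void handle_output_enable(struct wl_listener *listener, void *data) {
	struct output_capture_source *source =
		wl_container_of(listener, source, output_enable);
	update_buffer_constraints(source); // hypothetical helper
}
```
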
82 changed files with 526 additions and 1878 deletions


@ -35,9 +35,6 @@ tasks:
cd wlroots
ninja -C build
sudo ninja -C build install
- test: |
cd wlroots
meson test -C build --verbose
- build-features-disabled: |
cd wlroots
meson setup build --reconfigure -Dauto_features=disabled


@ -37,10 +37,6 @@ tasks:
- clang: |
cd wlroots/build-clang
ninja
- test: |
cd wlroots/build-gcc
meson test --verbose
meson test --benchmark --verbose
- smoke-test: |
cd wlroots/build-gcc/tinywl
sudo modprobe vkms


@ -32,9 +32,6 @@ tasks:
meson setup build --fatal-meson-warnings -Dauto_features=enabled -Dallocators=gbm
ninja -C build
sudo ninja -C build install
- test: |
cd wlroots
meson test -C build --verbose
- tinywl: |
cd wlroots/tinywl
make


@ -1,4 +1,3 @@
#include <assert.h>
#include <drm_fourcc.h>
#include <stdlib.h>
#include <stdio.h>
@ -476,62 +475,6 @@ static void set_plane_props(struct atomic *atom, struct wlr_drm_backend *drm,
atomic_add(atom, id, props->crtc_h, dst_box->height);
}
static void set_color_encoding_and_range(struct atomic *atom,
struct wlr_drm_backend *drm, struct wlr_drm_plane *plane,
enum wlr_color_encoding encoding, enum wlr_color_range range) {
uint32_t id = plane->id;
const struct wlr_drm_plane_props *props = &plane->props;
uint32_t color_encoding;
switch (encoding) {
case WLR_COLOR_ENCODING_NONE:
case WLR_COLOR_ENCODING_BT601:
color_encoding = WLR_DRM_COLOR_YCBCR_BT601;
break;
case WLR_COLOR_ENCODING_BT709:
color_encoding = WLR_DRM_COLOR_YCBCR_BT709;
break;
case WLR_COLOR_ENCODING_BT2020:
color_encoding = WLR_DRM_COLOR_YCBCR_BT2020;
break;
default:
wlr_log(WLR_DEBUG, "Unsupported color encoding %d", encoding);
atom->failed = true;
return;
}
if (props->color_encoding) {
atomic_add(atom, id, props->color_encoding, color_encoding);
} else {
wlr_log(WLR_DEBUG, "Plane %"PRIu32" is missing the COLOR_ENCODING property",
id);
atom->failed = true;
return;
}
uint32_t color_range;
switch (range) {
case WLR_COLOR_RANGE_NONE:
case WLR_COLOR_RANGE_LIMITED:
color_range = WLR_DRM_COLOR_YCBCR_LIMITED_RANGE;
break;
case WLR_COLOR_RANGE_FULL:
color_range = WLR_DRM_COLOR_YCBCR_FULL_RANGE;
break;
default:
assert(0); // Unreachable
}
if (props->color_range) {
atomic_add(atom, id, props->color_range, color_range);
} else {
wlr_log(WLR_DEBUG, "Plane %"PRIu32" is missing the COLOR_RANGE property",
id);
atom->failed = true;
return;
}
}
static bool supports_cursor_hotspots(const struct wlr_drm_plane *plane) {
return plane->props.hotspot_x && plane->props.hotspot_y;
}
@ -585,10 +528,6 @@ static void atomic_connector_add(struct atomic *atom,
set_plane_props(atom, drm, crtc->primary, state->primary_fb, crtc->id,
&state->primary_viewport.dst_box, &state->primary_viewport.src_box);
if (state->base->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) {
set_color_encoding_and_range(atom, drm, crtc->primary,
state->base->color_encoding, state->base->color_range);
}
if (crtc->primary->props.fb_damage_clips != 0) {
atomic_add(atom, crtc->primary->id,
crtc->primary->props.fb_damage_clips, state->fb_damage_clips);


@ -95,7 +95,7 @@ static const struct wlr_backend_impl backend_impl = {
.commit = backend_commit,
};
bool wlr_backend_is_drm(const struct wlr_backend *b) {
bool wlr_backend_is_drm(struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -43,8 +43,7 @@ static const uint32_t COMMIT_OUTPUT_STATE =
WLR_OUTPUT_STATE_WAIT_TIMELINE |
WLR_OUTPUT_STATE_SIGNAL_TIMELINE |
WLR_OUTPUT_STATE_COLOR_TRANSFORM |
WLR_OUTPUT_STATE_IMAGE_DESCRIPTION |
WLR_OUTPUT_STATE_COLOR_REPRESENTATION;
WLR_OUTPUT_STATE_IMAGE_DESCRIPTION;
static const uint32_t SUPPORTED_OUTPUT_STATE =
WLR_OUTPUT_STATE_BACKEND_OPTIONAL | COMMIT_OUTPUT_STATE;
@ -1321,7 +1320,7 @@ static const struct wlr_output_impl output_impl = {
.get_primary_formats = drm_connector_get_primary_formats,
};
bool wlr_output_is_drm(const struct wlr_output *output) {
bool wlr_output_is_drm(struct wlr_output *output) {
return output->impl == &output_impl;
}


@ -144,13 +144,7 @@ static struct wlr_drm_fb *drm_fb_create(struct wlr_drm_backend *drm,
struct wlr_buffer *buf, const struct wlr_drm_format_set *formats) {
struct wlr_dmabuf_attributes attribs;
if (!wlr_buffer_get_dmabuf(buf, &attribs)) {
struct wlr_shm_attributes shm;
if (wlr_buffer_get_shm(buf, &shm)) {
wlr_log(WLR_DEBUG, "Failed to get DMA-BUF from shm buffer");
} else {
wlr_log(WLR_DEBUG, "Failed to get DMA-BUF from buffer");
}
wlr_log(WLR_DEBUG, "Failed to get DMA-BUF from buffer");
return NULL;
}


@ -50,8 +50,6 @@ static const struct prop_info crtc_info[] = {
static const struct prop_info plane_info[] = {
#define INDEX(name) (offsetof(struct wlr_drm_plane_props, name) / sizeof(uint32_t))
{ "COLOR_ENCODING", INDEX(color_encoding) },
{ "COLOR_RANGE", INDEX(color_range) },
{ "CRTC_H", INDEX(crtc_h) },
{ "CRTC_ID", INDEX(crtc_id) },
{ "CRTC_W", INDEX(crtc_w) },


@ -81,6 +81,6 @@ struct wlr_backend *wlr_headless_backend_create(struct wl_event_loop *loop) {
return &backend->backend;
}
bool wlr_backend_is_headless(const struct wlr_backend *backend) {
bool wlr_backend_is_headless(struct wlr_backend *backend) {
return backend->impl == &backend_impl;
}


@ -106,7 +106,7 @@ static const struct wlr_output_impl output_impl = {
.move_cursor = output_move_cursor,
};
bool wlr_output_is_headless(const struct wlr_output *wlr_output) {
bool wlr_output_is_headless(struct wlr_output *wlr_output) {
return wlr_output->impl == &output_impl;
}


@ -155,7 +155,7 @@ static const struct wlr_backend_impl backend_impl = {
.destroy = backend_destroy,
};
bool wlr_backend_is_libinput(const struct wlr_backend *b) {
bool wlr_backend_is_libinput(struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -173,7 +173,7 @@ struct wlr_backend *wlr_multi_backend_create(struct wl_event_loop *loop) {
return &backend->backend;
}
bool wlr_backend_is_multi(const struct wlr_backend *b) {
bool wlr_backend_is_multi(struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -577,7 +577,7 @@ static const struct wlr_backend_impl backend_impl = {
.get_drm_fd = backend_get_drm_fd,
};
bool wlr_backend_is_wl(const struct wlr_backend *b) {
bool wlr_backend_is_wl(struct wlr_backend *b) {
return b->impl == &backend_impl;
}


@ -1027,7 +1027,7 @@ static const struct wlr_output_impl output_impl = {
.get_primary_formats = output_get_formats,
};
bool wlr_output_is_wl(const struct wlr_output *wlr_output) {
bool wlr_output_is_wl(struct wlr_output *wlr_output) {
return wlr_output->impl == &output_impl;
}


@ -219,7 +219,7 @@ static const struct wlr_backend_impl backend_impl = {
.get_drm_fd = backend_get_drm_fd,
};
bool wlr_backend_is_x11(const struct wlr_backend *backend) {
bool wlr_backend_is_x11(struct wlr_backend *backend) {
return backend->impl == &backend_impl;
}


@ -711,7 +711,7 @@ void handle_x11_configure_notify(struct wlr_x11_output *output,
wlr_output_state_finish(&state);
}
bool wlr_output_is_x11(const struct wlr_output *wlr_output) {
bool wlr_output_is_x11(struct wlr_output *wlr_output) {
return wlr_output->impl == &output_impl;
}


@ -65,22 +65,6 @@ struct wlr_drm_plane_props {
uint32_t hotspot_x;
uint32_t hotspot_y;
uint32_t in_fence_fd;
uint32_t color_encoding; // Not guaranteed to exist
uint32_t color_range; // Not guaranteed to exist
};
// Equivalent to wlr_drm_color_encoding defined in the kernel (but not exported)
enum wlr_drm_color_encoding {
WLR_DRM_COLOR_YCBCR_BT601,
WLR_DRM_COLOR_YCBCR_BT709,
WLR_DRM_COLOR_YCBCR_BT2020,
};
// Equivalent to wlr_drm_color_range defined in the kernel (but not exported)
enum wlr_drm_color_range {
WLR_DRM_COLOR_YCBCR_FULL_RANGE,
WLR_DRM_COLOR_YCBCR_LIMITED_RANGE,
};
bool get_drm_connector_props(int fd, uint32_t id,


@ -3,8 +3,6 @@
#include <wayland-server-core.h>
struct wlr_buffer;
/**
* Accumulate timeline points, to have a destination timeline point be
* signalled when all inputs are
@ -43,22 +41,4 @@ bool wlr_drm_syncobj_merger_add(struct wlr_drm_syncobj_merger *merger,
struct wlr_drm_syncobj_timeline *dst_timeline, uint64_t dst_point,
struct wl_event_loop *loop);
/**
* Add a new sync file to wait for.
*
* Ownership of fd is transferred to the merger.
*/
bool wlr_drm_syncobj_merger_add_sync_file(struct wlr_drm_syncobj_merger *merger,
int fd);
/**
* Add a new DMA-BUF release to wait for.
*
* Waits for write access.
* If the platform does not support DMA-BUF<->sync file interop, the supplied
* event_loop is used to schedule a wait.
*/
bool wlr_drm_syncobj_merger_add_dmabuf(struct wlr_drm_syncobj_merger *merger,
struct wlr_buffer *buffer, struct wl_event_loop *event_loop);
#endif


@ -119,8 +119,8 @@ struct wlr_gles2_texture {
GLenum target;
// If this texture is imported from a buffer, the texture does not own
// these states. They cannot be destroyed along with the texture in this
// If this texture is imported from a buffer, the texture is does not own
// these states. These cannot be destroyed along with the texture in this
// case.
GLuint tex;
GLuint fbo;


@ -56,10 +56,6 @@ struct wlr_vk_device {
PFN_vkGetSemaphoreFdKHR vkGetSemaphoreFdKHR;
PFN_vkImportSemaphoreFdKHR vkImportSemaphoreFdKHR;
PFN_vkQueueSubmit2KHR vkQueueSubmit2KHR;
PFN_vkBindImageMemory2KHR vkBindImageMemory2KHR;
PFN_vkCreateSamplerYcbcrConversionKHR vkCreateSamplerYcbcrConversionKHR;
PFN_vkDestroySamplerYcbcrConversionKHR vkDestroySamplerYcbcrConversionKHR;
PFN_vkGetImageMemoryRequirements2KHR vkGetImageMemoryRequirements2KHR;
} api;
uint32_t format_prop_count;
@ -67,9 +63,6 @@ struct wlr_vk_device {
struct wlr_drm_format_set dmabuf_render_formats;
struct wlr_drm_format_set dmabuf_texture_formats;
struct wlr_drm_format_set shm_texture_formats;
float timestamp_period;
uint32_t timestamp_valid_bits;
};
// Tries to find the VkPhysicalDevice for the given drm fd.
@ -284,6 +277,8 @@ struct wlr_vk_command_buffer {
uint64_t timeline_point;
// Textures to destroy after the command buffer completes
struct wl_list destroy_textures; // wlr_vk_texture.destroy_link
// Staging shared buffers to release after the command buffer completes
struct wl_list stage_buffers; // wlr_vk_shared_buffer.link
// Color transform to unref after the command buffer completes
struct wlr_color_transform *color_transform;
@ -350,7 +345,7 @@ struct wlr_vk_renderer {
struct {
struct wlr_vk_command_buffer *cb;
uint64_t last_timeline_point;
struct wl_list buffers; // wlr_vk_stage_buffer.link
struct wl_list buffers; // wlr_vk_shared_buffer.link
} stage;
struct {
@ -411,13 +406,7 @@ VkCommandBuffer vulkan_record_stage_cb(struct wlr_vk_renderer *renderer);
// Submits the current stage command buffer and waits until it has
// finished execution.
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer, int wait_sync_file_fd);
struct wlr_vk_render_timer {
struct wlr_render_timer base;
struct wlr_vk_renderer *renderer;
VkQueryPool query_pool;
};
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer);
struct wlr_vk_render_pass_texture {
struct wlr_vk_texture *texture;
@ -443,28 +432,20 @@ struct wlr_vk_render_pass {
struct wlr_drm_syncobj_timeline *signal_timeline;
uint64_t signal_point;
struct wlr_vk_render_timer *timer;
struct wl_array textures; // struct wlr_vk_render_pass_texture
};
struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *renderer,
struct wlr_vk_render_buffer *buffer, const struct wlr_buffer_pass_options *options);
// Suballocates a buffer span with the given size from the staging ring buffer
// that is mapped for CPU access. vulkan_stage_mark_submit must be called after
// allocations are made to mark the timeline point after which the allocations
// will be released. The start of the span will be a multiple of alignment.
// Suballocates a buffer span with the given size that can be mapped
// and used as staging buffer. The allocation is implicitly released when the
// stage cb has finished execution. The start of the span will be a multiple
// of the given alignment.
struct wlr_vk_buffer_span vulkan_get_stage_span(
struct wlr_vk_renderer *renderer, VkDeviceSize size,
VkDeviceSize alignment);
// Records a watermark on all staging buffers with new allocations with the
// specified timeline point. Once the timeline point is passed, the span will
// be reclaimed by vulkan_stage_buffer_reclaim.
void vulkan_stage_mark_submit(struct wlr_vk_renderer *renderer,
uint64_t timeline_point);
// Tries to allocate a texture descriptor set. Will additionally
// return the pool it was allocated from when successful (for freeing it later).
struct wlr_vk_descriptor_pool *vulkan_alloc_texture_ds(
@ -491,8 +472,6 @@ uint64_t vulkan_end_command_buffer(struct wlr_vk_command_buffer *cb,
void vulkan_reset_command_buffer(struct wlr_vk_command_buffer *cb);
bool vulkan_wait_command_buffer(struct wlr_vk_command_buffer *cb,
struct wlr_vk_renderer *renderer);
VkSemaphore vulkan_command_buffer_wait_sync_file(struct wlr_vk_renderer *renderer,
struct wlr_vk_command_buffer *render_cb, size_t sem_index, int sync_file_fd);
bool vulkan_sync_render_pass_release(struct wlr_vk_renderer *renderer,
struct wlr_vk_render_pass *pass);
@ -505,8 +484,7 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
VkFormat src_format, VkImage src_image,
uint32_t drm_format, uint32_t stride,
uint32_t width, uint32_t height, uint32_t src_x, uint32_t src_y,
uint32_t dst_x, uint32_t dst_y, void *data,
struct wlr_drm_syncobj_timeline *wait_timeline, uint64_t wait_point);
uint32_t dst_x, uint32_t dst_y, void *data);
// State (e.g. image texture) associated with a surface.
struct wlr_vk_texture {
@ -548,43 +526,29 @@ struct wlr_vk_descriptor_pool {
struct wl_list link; // wlr_vk_renderer.descriptor_pools
};
struct wlr_vk_stage_watermark {
VkDeviceSize head;
uint64_t timeline_point;
struct wlr_vk_allocation {
VkDeviceSize start;
VkDeviceSize size;
};
// Ring buffer for staging transfers
struct wlr_vk_stage_buffer {
struct wl_list link; // wlr_vk_renderer.stage.buffers
// List of suballocated staging buffers.
// Used to upload to/read from device local images.
struct wlr_vk_shared_buffer {
struct wl_list link; // wlr_vk_renderer.stage.buffers or wlr_vk_command_buffer.stage_buffers
VkBuffer buffer;
VkDeviceMemory memory;
VkDeviceSize buf_size;
void *cpu_mapping;
VkDeviceSize head;
VkDeviceSize tail;
struct wl_array watermarks; // struct wlr_vk_stage_watermark
int empty_gc_cnt;
struct wl_array allocs; // struct wlr_vk_allocation
int64_t last_used_ms;
};
// Suballocated range on a staging ring buffer.
// Suballocated range on a buffer.
struct wlr_vk_buffer_span {
struct wlr_vk_stage_buffer *buffer;
VkDeviceSize offset;
VkDeviceSize size;
struct wlr_vk_shared_buffer *buffer;
struct wlr_vk_allocation alloc;
};
// Suballocate a span of size bytes from a staging ring buffer, with the
// returned offset rounded up to the given alignment. Returns the byte offset
// of the allocation, or (VkDeviceSize)-1 if the buffer is too full to fit it.
VkDeviceSize vulkan_stage_buffer_alloc(struct wlr_vk_stage_buffer *buf,
VkDeviceSize size, VkDeviceSize alignment);
// Free all allocations covered by watermarks whose timeline point has been
// reached.
void vulkan_stage_buffer_reclaim(struct wlr_vk_stage_buffer *buf,
uint64_t current_point);
// Prepared form for a color transform
struct wlr_vk_color_transform {


@ -58,15 +58,13 @@ void rect_union_finish(struct rect_union *r);
*
* Amortized time: O(1)
*/
void rect_union_add(struct rect_union *r, const pixman_box32_t *box);
void rect_union_add(struct rect_union *r, pixman_box32_t box);
/**
* Compute an exact cover of the rectangles added so far, and return
* a pointer to a pixman_region32_t giving that cover. The pointer will
* remain valid until the next time *r is modified.
*
* An internal complexity limit is enforced by rect_union. If exceeded, this
* function will instead return a single-rectangle bounding box.
* remain valid until the next time *r is modified. If there was an allocation
* failure, this function may return a single-rectangle bounding box instead.
*
* This may be called multiple times and interleaved with rect_union_add().
*


@ -39,8 +39,8 @@ struct wlr_drm_lease {
struct wlr_backend *wlr_drm_backend_create(struct wlr_session *session,
struct wlr_device *dev, struct wlr_backend *parent);
bool wlr_backend_is_drm(const struct wlr_backend *backend);
bool wlr_output_is_drm(const struct wlr_output *output);
bool wlr_backend_is_drm(struct wlr_backend *backend);
bool wlr_output_is_drm(struct wlr_output *output);
/**
* Get the parent DRM backend, if any.


@ -25,7 +25,7 @@ struct wlr_backend *wlr_headless_backend_create(struct wl_event_loop *loop);
struct wlr_output *wlr_headless_add_output(struct wlr_backend *backend,
unsigned int width, unsigned int height);
bool wlr_backend_is_headless(const struct wlr_backend *backend);
bool wlr_output_is_headless(const struct wlr_output *output);
bool wlr_backend_is_headless(struct wlr_backend *backend);
bool wlr_output_is_headless(struct wlr_output *output);
#endif


@ -29,7 +29,7 @@ struct libinput_device *wlr_libinput_get_device_handle(
struct libinput_tablet_tool *wlr_libinput_get_tablet_tool_handle(
struct wlr_tablet_tool *wlr_tablet_tool);
bool wlr_backend_is_libinput(const struct wlr_backend *backend);
bool wlr_backend_is_libinput(struct wlr_backend *backend);
bool wlr_input_device_is_libinput(struct wlr_input_device *device);
#endif


@ -26,7 +26,7 @@ bool wlr_multi_backend_add(struct wlr_backend *multi,
void wlr_multi_backend_remove(struct wlr_backend *multi,
struct wlr_backend *backend);
bool wlr_backend_is_multi(const struct wlr_backend *backend);
bool wlr_backend_is_multi(struct wlr_backend *backend);
bool wlr_multi_is_empty(struct wlr_backend *backend);
void wlr_multi_for_each_backend(struct wlr_backend *backend,


@ -46,7 +46,7 @@ struct wlr_output *wlr_wl_output_create_from_surface(struct wlr_backend *backend
/**
* Check whether the provided backend is a Wayland backend.
*/
bool wlr_backend_is_wl(const struct wlr_backend *backend);
bool wlr_backend_is_wl(struct wlr_backend *backend);
/**
* Check whether the provided input device is a Wayland input device.
@ -56,7 +56,7 @@ bool wlr_input_device_is_wl(struct wlr_input_device *device);
/**
* Check whether the provided output device is a Wayland output device.
*/
bool wlr_output_is_wl(const struct wlr_output *output);
bool wlr_output_is_wl(struct wlr_output *output);
/**
* Sets the title of a struct wlr_output which is a Wayland toplevel.


@ -31,7 +31,7 @@ struct wlr_output *wlr_x11_output_create(struct wlr_backend *backend);
/**
* Check whether this backend is an X11 backend.
*/
bool wlr_backend_is_x11(const struct wlr_backend *backend);
bool wlr_backend_is_x11(struct wlr_backend *backend);
/**
* Check whether this input device is an X11 input device.
@ -41,7 +41,7 @@ bool wlr_input_device_is_x11(struct wlr_input_device *device);
/**
* Check whether this output device is an X11 output device.
*/
bool wlr_output_is_x11(const struct wlr_output *output);
bool wlr_output_is_x11(struct wlr_output *output);
/**
* Sets the title of a struct wlr_output which is an X11 window.


@ -97,11 +97,7 @@ bool wlr_drm_syncobj_timeline_signal(struct wlr_drm_syncobj_timeline *timeline,
/**
* Asynchronously wait for a timeline point.
*
* Flags can be:
*
* - 0 to wait for the point to be signalled
* - DRM_SYNCOBJ_WAIT_FLAGS_WAIT_AVAILABLE to only wait for a fence to
* materialize
* See wlr_drm_syncobj_timeline_check() for a definition of flags.
*
* A callback must be provided that will be invoked when the waiter has finished.
*/


@ -42,9 +42,9 @@ struct wlr_gles2_texture_attribs {
bool has_alpha;
};
bool wlr_renderer_is_gles2(const struct wlr_renderer *wlr_renderer);
bool wlr_render_timer_is_gles2(const struct wlr_render_timer *timer);
bool wlr_texture_is_gles2(const struct wlr_texture *texture);
bool wlr_renderer_is_gles2(struct wlr_renderer *wlr_renderer);
bool wlr_render_timer_is_gles2(struct wlr_render_timer *timer);
bool wlr_texture_is_gles2(struct wlr_texture *texture);
void wlr_gles2_texture_get_attribs(struct wlr_texture *texture,
struct wlr_gles2_texture_attribs *attribs);


@ -14,8 +14,8 @@
struct wlr_renderer *wlr_pixman_renderer_create(void);
bool wlr_renderer_is_pixman(const struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_pixman(const struct wlr_texture *texture);
bool wlr_renderer_is_pixman(struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_pixman(struct wlr_texture *texture);
pixman_image_t *wlr_pixman_renderer_get_buffer_image(
struct wlr_renderer *wlr_renderer, struct wlr_buffer *wlr_buffer);


@ -25,8 +25,8 @@ VkPhysicalDevice wlr_vk_renderer_get_physical_device(struct wlr_renderer *render
VkDevice wlr_vk_renderer_get_device(struct wlr_renderer *renderer);
uint32_t wlr_vk_renderer_get_queue_family(struct wlr_renderer *renderer);
bool wlr_renderer_is_vk(const struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_vk(const struct wlr_texture *texture);
bool wlr_renderer_is_vk(struct wlr_renderer *wlr_renderer);
bool wlr_texture_is_vk(struct wlr_texture *texture);
void wlr_vk_texture_get_image_attribs(struct wlr_texture *texture,
struct wlr_vk_image_attribs *attribs);


@ -37,8 +37,6 @@ struct wlr_texture_read_pixels_options {
uint32_t dst_x, dst_y;
/** Source box of the texture to read from. If empty, the full texture is assumed. */
const struct wlr_box src_box;
struct wlr_drm_syncobj_timeline *wait_timeline;
uint64_t wait_point;
};
bool wlr_texture_read_pixels(struct wlr_texture *texture,


@ -1,50 +0,0 @@
/*
* This an unstable interface of wlroots. No guarantees are made regarding the
* future consistency of this API.
*/
#ifndef WLR_USE_UNSTABLE
#error "Add -DWLR_USE_UNSTABLE to enable unstable wlroots features"
#endif
#ifndef WLR_TYPES_WLR_EXT_BACKGROUND_EFFECT_V1_H
#define WLR_TYPES_WLR_EXT_BACKGROUND_EFFECT_V1_H
#include <pixman.h>
#include <wayland-server-core.h>
#include <wayland-protocols/ext-background-effect-v1-enum.h>
struct wlr_surface;
struct wlr_ext_background_effect_surface_v1_state {
pixman_region32_t blur_region;
};
struct wlr_ext_background_effect_manager_v1 {
struct wl_global *global;
uint32_t capabilities; // bitmask of enum ext_background_effect_manager_v1_capability
struct {
struct wl_signal destroy;
} events;
void *data;
struct {
struct wl_list resources; // wl_resource_get_link()
struct wl_listener display_destroy;
} WLR_PRIVATE;
};
struct wlr_ext_background_effect_manager_v1 *wlr_ext_background_effect_manager_v1_create(
struct wl_display *display, uint32_t version, uint32_t capabilities);
/**
* Get the committed background effect state for a surface.
*
* Returns NULL if the client has not attached a background effect object to
* the surface.
*/
const struct wlr_ext_background_effect_surface_v1_state *
wlr_ext_background_effect_v1_get_surface_state(struct wlr_surface *surface);
#endif


@ -91,7 +91,7 @@ struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1 {
struct {
struct wl_signal destroy;
struct wl_signal capture_request; // struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event
struct wl_signal new_request; // struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request
} events;
struct {
@ -99,7 +99,7 @@ struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1 {
} WLR_PRIVATE;
};
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event {
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request {
struct wlr_ext_foreign_toplevel_handle_v1 *toplevel_handle;
struct wl_client *client;
@ -124,7 +124,7 @@ struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1 *
wlr_ext_foreign_toplevel_image_capture_source_manager_v1_create(struct wl_display *display, uint32_t version);
bool wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_accept(
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event *request,
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request *request,
struct wlr_ext_image_capture_source_v1 *source);
struct wlr_ext_image_capture_source_v1 *wlr_ext_image_capture_source_v1_create_with_scene_node(


@ -74,21 +74,4 @@ bool wlr_linux_drm_syncobj_v1_state_add_release_point(
struct wlr_drm_syncobj_timeline *release_timeline, uint64_t release_point,
struct wl_event_loop *event_loop);
/**
* Register the DMA-BUF release of a buffer for buffer usage.
* Non-dmabuf buffers are considered to be immediately available (no wait).
*
* This function may be called multiple times for the same commit. The client's
* release point will be signalled when all registered points are signalled, and
* a new buffer has been committed.
*
* Because the platform may not support DMA-BUF fence merges, a wl_event_loop
* must be supplied to schedule a wait internally, if needed
*
* Waits for write access
*/
bool wlr_linux_drm_syncobj_v1_state_add_release_from_implicit_sync(
struct wlr_linux_drm_syncobj_surface_v1_state *state,
struct wlr_buffer *buffer, struct wl_event_loop *event_loop);
#endif


@ -77,7 +77,6 @@ enum wlr_output_state_field {
WLR_OUTPUT_STATE_SIGNAL_TIMELINE = 1 << 11,
WLR_OUTPUT_STATE_COLOR_TRANSFORM = 1 << 12,
WLR_OUTPUT_STATE_IMAGE_DESCRIPTION = 1 << 13,
WLR_OUTPUT_STATE_COLOR_REPRESENTATION = 1 << 14,
};
enum wlr_output_state_mode_type {
@ -143,10 +142,6 @@ struct wlr_output_state {
* regular page-flip at the next wlr_output.frame event. */
bool tearing_page_flip;
// Set if (committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION)
enum wlr_color_encoding color_encoding;
enum wlr_color_range color_range;
enum wlr_output_state_mode_type mode_type;
struct wlr_output_mode *mode;
struct {
@ -210,8 +205,6 @@ struct wlr_output {
enum wl_output_transform transform;
enum wlr_output_adaptive_sync_status adaptive_sync_status;
uint32_t render_format;
enum wlr_color_encoding color_encoding;
enum wlr_color_range color_range;
const struct wlr_output_image_description *image_description;
// Indicates whether making changes to adaptive sync status is supported.
@ -632,15 +625,6 @@ void wlr_output_state_set_color_transform(struct wlr_output_state *state,
bool wlr_output_state_set_image_description(struct wlr_output_state *state,
const struct wlr_output_image_description *image_desc);
/**
* Set the color encoding and range of the primary scanout buffer.
*
* Pass WLR_COLOR_ENCODING_NONE / WLR_COLOR_RANGE_NONE to reset to defaults.
*/
void wlr_output_state_set_color_encoding_and_range(
struct wlr_output_state *state,
enum wlr_color_encoding encoding, enum wlr_color_range range);
/**
* Copies the output state from src to dst. It is safe to then
* wlr_output_state_finish() src and have dst still be valid.


@ -171,6 +171,8 @@ struct wlr_scene_buffer {
struct {
struct wl_signal outputs_update; // struct wlr_scene_outputs_update_event
struct wl_signal output_enter; // struct wlr_scene_output
struct wl_signal output_leave; // struct wlr_scene_output
struct wl_signal output_sample; // struct wlr_scene_output_sample_event
struct wl_signal frame_done; // struct wlr_scene_frame_done_event
} events;


@ -107,13 +107,6 @@ void wlr_fbox_transform(struct wlr_fbox *dest, const struct wlr_fbox *box,
#ifdef WLR_USE_UNSTABLE
/**
* Checks whether two boxes intersect.
*
* Returns false if either box is empty.
*/
bool wlr_box_intersects(const struct wlr_box *a, const struct wlr_box *b);
/**
* Returns true if the two boxes are equal, false otherwise.
*/


@ -82,9 +82,9 @@ void xwm_handle_selection_notify(struct wlr_xwm *xwm,
xcb_selection_notify_event_t *event);
int xwm_handle_xfixes_selection_notify(struct wlr_xwm *xwm,
xcb_xfixes_selection_notify_event_t *event);
bool data_source_is_xwayland(const struct wlr_data_source *wlr_source);
bool data_source_is_xwayland(struct wlr_data_source *wlr_source);
bool primary_selection_source_is_xwayland(
const struct wlr_primary_selection_source *wlr_source);
struct wlr_primary_selection_source *wlr_source);
void xwm_seat_handle_start_drag(struct wlr_xwm *xwm, struct wlr_drag *drag);


@ -1,7 +1,7 @@
project(
'wlroots',
'c',
version: '0.21.0-dev',
version: '0.20.0',
license: 'MIT',
meson_version: '>=1.3',
default_options: [
@ -178,10 +178,6 @@ if get_option('examples')
subdir('tinywl')
endif
if get_option('tests')
subdir('test')
endif
pkgconfig = import('pkgconfig')
pkgconfig.generate(
lib_wlr,


@ -7,6 +7,5 @@ option('backends', type: 'array', choices: ['auto', 'drm', 'libinput', 'x11'], v
option('allocators', type: 'array', choices: ['auto', 'gbm', 'udmabuf'], value: ['auto'],
description: 'Select built-in allocators')
option('session', type: 'feature', value: 'auto', description: 'Enable session support')
option('tests', type: 'boolean', value: true, description: 'Build tests and benchmarks')
option('color-management', type: 'feature', value: 'auto', description: 'Enable support for color management')
option('libliftoff', type: 'feature', value: 'auto', description: 'Enable support for libliftoff')


@ -30,7 +30,6 @@ protocols = {
'content-type-v1': wl_protocol_dir / 'staging/content-type/content-type-v1.xml',
'cursor-shape-v1': wl_protocol_dir / 'staging/cursor-shape/cursor-shape-v1.xml',
'drm-lease-v1': wl_protocol_dir / 'staging/drm-lease/drm-lease-v1.xml',
'ext-background-effect-v1': wl_protocol_dir / 'staging/ext-background-effect/ext-background-effect-v1.xml',
'ext-foreign-toplevel-list-v1': wl_protocol_dir / 'staging/ext-foreign-toplevel-list/ext-foreign-toplevel-list-v1.xml',
'ext-idle-notify-v1': wl_protocol_dir / 'staging/ext-idle-notify/ext-idle-notify-v1.xml',
'ext-image-capture-source-v1': wl_protocol_dir / 'staging/ext-image-capture-source/ext-image-capture-source-v1.xml',


@ -56,8 +56,7 @@ bool dmabuf_import_sync_file(int dmabuf_fd, uint32_t flags, int sync_file_fd) {
.fd = sync_file_fd,
};
if (drmIoctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &data) != 0) {
enum wlr_log_importance importance = errno == ENOTTY ? WLR_DEBUG : WLR_ERROR;
wlr_log_errno(importance, "drmIoctl(IMPORT_SYNC_FILE) failed");
wlr_log_errno(WLR_ERROR, "drmIoctl(IMPORT_SYNC_FILE) failed");
return false;
}
return true;
@ -69,8 +68,7 @@ int dmabuf_export_sync_file(int dmabuf_fd, uint32_t flags) {
.fd = -1,
};
if (drmIoctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &data) != 0) {
enum wlr_log_importance importance = errno == ENOTTY ? WLR_DEBUG : WLR_ERROR;
wlr_log_errno(importance, "drmIoctl(EXPORT_SYNC_FILE) failed");
wlr_log_errno(WLR_ERROR, "drmIoctl(EXPORT_SYNC_FILE) failed");
return -1;
}
return data.fd;


@ -222,8 +222,14 @@ bool wlr_drm_syncobj_timeline_waiter_init(struct wlr_drm_syncobj_timeline_waiter
return false;
}
if (drmSyncobjEventfd(timeline->drm_fd, timeline->handle, point, ev_fd, flags) != 0) {
wlr_log_errno(WLR_ERROR, "drmSyncobjEventfd() failed");
struct drm_syncobj_eventfd syncobj_eventfd = {
.handle = timeline->handle,
.flags = flags,
.point = point,
.fd = ev_fd,
};
if (drmIoctl(timeline->drm_fd, DRM_IOCTL_SYNCOBJ_EVENTFD, &syncobj_eventfd) != 0) {
wlr_log_errno(WLR_ERROR, "DRM_IOCTL_SYNCOBJ_EVENTFD failed");
close(ev_fd);
return false;
}


@ -4,10 +4,8 @@
#include <unistd.h>
#include <wayland-util.h>
#include <wlr/render/drm_syncobj.h>
#include <wlr/types/wlr_buffer.h>
#include <wlr/util/log.h>
#include <xf86drm.h>
#include "render/dmabuf.h"
#include "render/drm_syncobj_merger.h"
#include "config.h"
@ -81,7 +79,14 @@ void wlr_drm_syncobj_merger_unref(struct wlr_drm_syncobj_merger *merger) {
static bool merger_add_exportable(struct wlr_drm_syncobj_merger *merger,
struct wlr_drm_syncobj_timeline *src_timeline, uint64_t src_point) {
int new_sync = wlr_drm_syncobj_timeline_export_sync_file(src_timeline, src_point);
return wlr_drm_syncobj_merger_add_sync_file(merger, new_sync);
if (merger->sync_fd != -1) {
int fd2 = new_sync;
new_sync = sync_file_merge(merger->sync_fd, fd2);
close(fd2);
close(merger->sync_fd);
}
merger->sync_fd = new_sync;
return true;
}
struct export_waiter {
@ -126,69 +131,3 @@ bool wlr_drm_syncobj_merger_add(struct wlr_drm_syncobj_merger *merger,
merger->n_ref++;
return true;
}
bool wlr_drm_syncobj_merger_add_sync_file(struct wlr_drm_syncobj_merger *merger,
int fd) {
int new_sync = fd;
if (merger->sync_fd != -1) {
new_sync = sync_file_merge(merger->sync_fd, fd);
close(fd);
close(merger->sync_fd);
}
merger->sync_fd = new_sync;
return merger->sync_fd != -1;
}
struct poll_waiter {
struct wl_event_source *event_source;
struct wlr_drm_syncobj_merger *merger;
};
static int poll_waiter_handle_done(int fd, uint32_t mask, void *data) {
struct poll_waiter *waiter = data;
wlr_drm_syncobj_merger_unref(waiter->merger);
wl_event_source_remove(waiter->event_source);
free(waiter);
return 0;
}
bool wlr_drm_syncobj_merger_add_dmabuf(struct wlr_drm_syncobj_merger *merger,
struct wlr_buffer *buffer, struct wl_event_loop *event_loop) {
struct wlr_dmabuf_attributes dmabuf_attributes;
if (!wlr_buffer_get_dmabuf(buffer, &dmabuf_attributes)) {
return true;
}
bool res = true;
for (int i = 0; i < dmabuf_attributes.n_planes; ++i) {
int sync_fd = dmabuf_export_sync_file(dmabuf_attributes.fd[i], DMA_BUF_SYNC_WRITE);
if (sync_fd == -1) {
res = false;
break;
}
if (!wlr_drm_syncobj_merger_add_sync_file(merger, sync_fd)) {
return false;
}
}
if (res) {
return true;
}
uint32_t mask = WL_EVENT_ERROR | WL_EVENT_HANGUP | WL_EVENT_WRITABLE;
for (int i = 0; i < dmabuf_attributes.n_planes; ++i) {
struct poll_waiter *waiter = calloc(1, sizeof(*waiter));
if (waiter == NULL) {
return false;
}
waiter->merger = wlr_drm_syncobj_merger_ref(merger);
waiter->event_source = wl_event_loop_add_fd(event_loop,
dmabuf_attributes.fd[i], mask, poll_waiter_handle_done, waiter);
if (waiter->event_source == NULL) {
wlr_drm_syncobj_merger_unref(waiter->merger);
free(waiter);
return false;
}
}
return true;
}


@ -29,7 +29,7 @@
static const struct wlr_renderer_impl renderer_impl;
static const struct wlr_render_timer_impl render_timer_impl;
bool wlr_renderer_is_gles2(const struct wlr_renderer *wlr_renderer) {
bool wlr_renderer_is_gles2(struct wlr_renderer *wlr_renderer) {
return wlr_renderer->impl == &renderer_impl;
}
@ -40,7 +40,7 @@ struct wlr_gles2_renderer *gles2_get_renderer(
return renderer;
}
bool wlr_render_timer_is_gles2(const struct wlr_render_timer *timer) {
bool wlr_render_timer_is_gles2(struct wlr_render_timer *timer) {
return timer->impl == &render_timer_impl;
}


@ -4,10 +4,8 @@
#include <GLES2/gl2ext.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <wayland-server-protocol.h>
#include <wayland-util.h>
#include <wlr/render/drm_syncobj.h>
#include <wlr/render/egl.h>
#include <wlr/render/interface.h>
#include <wlr/render/wlr_texture.h>
@ -18,7 +16,7 @@
static const struct wlr_texture_impl texture_impl;
bool wlr_texture_is_gles2(const struct wlr_texture *wlr_texture) {
bool wlr_texture_is_gles2(struct wlr_texture *wlr_texture) {
return wlr_texture->impl == &texture_impl;
}
@ -203,27 +201,6 @@ static bool gles2_texture_read_pixels(struct wlr_texture *wlr_texture,
return false;
}
if (options->wait_timeline != NULL) {
int sync_file_fd =
wlr_drm_syncobj_timeline_export_sync_file(options->wait_timeline, options->wait_point);
if (sync_file_fd < 0) {
return false;
}
struct wlr_gles2_renderer *renderer = texture->renderer;
EGLSyncKHR sync = wlr_egl_create_sync(renderer->egl, sync_file_fd);
close(sync_file_fd);
if (sync == EGL_NO_SYNC_KHR) {
return false;
}
bool ok = wlr_egl_wait_sync(renderer->egl, sync);
wlr_egl_destroy_sync(renderer->egl, sync);
if (!ok) {
return false;
}
}
// Make sure any pending drawing is finished before we try to read it
glFinish();


@ -159,7 +159,6 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
switch (options->filter_mode) {
case WLR_SCALE_FILTER_BILINEAR:
pixman_image_set_repeat(texture->image, PIXMAN_REPEAT_PAD);
pixman_image_set_filter(texture->image, PIXMAN_FILTER_BILINEAR, NULL, 0);
break;
case WLR_SCALE_FILTER_NEAREST:


@ -12,7 +12,7 @@
static const struct wlr_renderer_impl renderer_impl;
bool wlr_renderer_is_pixman(const struct wlr_renderer *wlr_renderer) {
bool wlr_renderer_is_pixman(struct wlr_renderer *wlr_renderer) {
return wlr_renderer->impl == &renderer_impl;
}
@ -69,7 +69,7 @@ static struct wlr_pixman_buffer *get_buffer(
static const struct wlr_texture_impl texture_impl;
bool wlr_texture_is_pixman(const struct wlr_texture *texture) {
bool wlr_texture_is_pixman(struct wlr_texture *texture) {
return texture->impl == &texture_impl;
}


@ -2,9 +2,7 @@
#include <drm_fourcc.h>
#include <stdlib.h>
#include <unistd.h>
#include <wlr/util/box.h>
#include <wlr/util/log.h>
#include <wlr/util/transform.h>
#include <wlr/render/color.h>
#include <wlr/render/drm_syncobj.h>
@ -40,6 +38,17 @@ static void bind_pipeline(struct wlr_vk_render_pass *pass, VkPipeline pipeline)
pass->bound_pipeline = pipeline;
}
static void get_clip_region(struct wlr_vk_render_pass *pass,
const pixman_region32_t *in, pixman_region32_t *out) {
if (in != NULL) {
pixman_region32_init(out);
pixman_region32_copy(out, in);
} else {
struct wlr_buffer *buffer = pass->render_buffer->wlr_buffer;
pixman_region32_init_rect(out, 0, 0, buffer->width, buffer->height);
}
}
static void convert_pixman_box_to_vk_rect(const pixman_box32_t *box, VkRect2D *rect) {
*rect = (VkRect2D){
.offset = { .x = box->x1, .y = box->y1 },
@ -90,6 +99,53 @@ static void render_pass_destroy(struct wlr_vk_render_pass *pass) {
free(pass);
}
static VkSemaphore render_pass_wait_sync_file(struct wlr_vk_render_pass *pass,
size_t sem_index, int sync_file_fd) {
struct wlr_vk_renderer *renderer = pass->renderer;
struct wlr_vk_command_buffer *render_cb = pass->command_buffer;
VkResult res;
VkSemaphore *wait_semaphores = render_cb->wait_semaphores.data;
size_t wait_semaphores_len = render_cb->wait_semaphores.size / sizeof(wait_semaphores[0]);
VkSemaphore *sem_ptr;
if (sem_index >= wait_semaphores_len) {
sem_ptr = wl_array_add(&render_cb->wait_semaphores, sizeof(*sem_ptr));
if (sem_ptr == NULL) {
return VK_NULL_HANDLE;
}
*sem_ptr = VK_NULL_HANDLE;
} else {
sem_ptr = &wait_semaphores[sem_index];
}
if (*sem_ptr == VK_NULL_HANDLE) {
VkSemaphoreCreateInfo semaphore_info = {
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
};
res = vkCreateSemaphore(renderer->dev->dev, &semaphore_info, NULL, sem_ptr);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateSemaphore", res);
return VK_NULL_HANDLE;
}
}
VkImportSemaphoreFdInfoKHR import_info = {
.sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
.flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
.semaphore = *sem_ptr,
.fd = sync_file_fd,
};
res = renderer->dev->api.vkImportSemaphoreFdKHR(renderer->dev->dev, &import_info);
if (res != VK_SUCCESS) {
wlr_vk_error("vkImportSemaphoreFdKHR", res);
return VK_NULL_HANDLE;
}
return *sem_ptr;
}
static bool render_pass_wait_render_buffer(struct wlr_vk_render_pass *pass,
VkSemaphoreSubmitInfoKHR *render_wait, uint32_t *render_wait_len_ptr) {
int sync_file_fds[WLR_DMABUF_MAX_PLANES];
@ -106,8 +162,7 @@ static bool render_pass_wait_render_buffer(struct wlr_vk_render_pass *pass,
continue;
}
VkSemaphore sem = vulkan_command_buffer_wait_sync_file(pass->renderer,
pass->command_buffer, *render_wait_len_ptr, sync_file_fds[i]);
VkSemaphore sem = render_pass_wait_sync_file(pass, *render_wait_len_ptr, sync_file_fds[i]);
if (sem == VK_NULL_HANDLE) {
close(sync_file_fds[i]);
continue;
@ -193,13 +248,11 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
int width = pass->render_buffer->wlr_buffer->width;
int height = pass->render_buffer->wlr_buffer->height;
struct wlr_box output_box = { 0, 0, width, height };
float proj[9], final_matrix[9];
wlr_matrix_identity(proj);
wlr_matrix_project_box(final_matrix, &output_box,
WL_OUTPUT_TRANSFORM_NORMAL, proj);
wlr_matrix_multiply(final_matrix, pass->projection, final_matrix);
float final_matrix[9] = {
width, 0, -1,
0, height, -1,
0, 0, 0,
};
struct wlr_vk_vert_pcr_data vert_pcr_data = {
.uv_off = { 0, 0 },
.uv_size = { 1, 1 },
@ -278,38 +331,16 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(
clip, &clip_rects_len);
if (clip_rects_len > 0) {
const VkDeviceSize instance_size = 4 * sizeof(float);
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer,
clip_rects_len * instance_size, 16);
if (!span.buffer) {
pass->failed = true;
goto error;
}
float *instance_data = (float *)((char *)span.buffer->cpu_mapping + span.offset);
for (int i = 0; i < clip_rects_len; i++) {
const pixman_box32_t *b = &clip_rects[i];
instance_data[i * 4 + 0] = (float)b->x1 / width;
instance_data[i * 4 + 1] = (float)b->y1 / height;
instance_data[i * 4 + 2] = (float)(b->x2 - b->x1) / width;
instance_data[i * 4 + 3] = (float)(b->y2 - b->y1) / height;
}
VkDeviceSize vb_offset = span.offset;
vkCmdBindVertexBuffers(render_cb->vk, 0, 1, &span.buffer->buffer, &vb_offset);
vkCmdDraw(render_cb->vk, 4, clip_rects_len, 0, 0);
for (int i = 0; i < clip_rects_len; i++) {
VkRect2D rect;
convert_pixman_box_to_vk_rect(&clip_rects[i], &rect);
vkCmdSetScissor(render_cb->vk, 0, 1, &rect);
vkCmdDraw(render_cb->vk, 4, 1, 0, 0);
}
}
vkCmdEndRenderPass(render_cb->vk);
if (pass->timer != NULL) {
vkCmdWriteTimestamp(render_cb->vk, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
pass->timer->query_pool, 1);
}
size_t pass_textures_len = pass->textures.size / sizeof(struct wlr_vk_render_pass_texture);
size_t render_wait_cap = (1 + pass_textures_len) * WLR_DMABUF_MAX_PLANES;
render_wait = calloc(render_wait_cap, sizeof(*render_wait));
@ -400,8 +431,7 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
continue;
}
VkSemaphore sem = vulkan_command_buffer_wait_sync_file(renderer, render_cb,
render_wait_len, sync_file_fds[i]);
VkSemaphore sem = render_pass_wait_sync_file(pass, render_wait_len, sync_file_fds[i]);
if (sem == VK_NULL_HANDLE) {
close(sync_file_fds[i]);
continue;
@ -605,7 +635,14 @@ static bool render_pass_submit(struct wlr_render_pass *wlr_pass) {
free(render_wait);
vulkan_stage_mark_submit(renderer, render_timeline_point);
struct wlr_vk_shared_buffer *stage_buf, *stage_buf_tmp;
wl_list_for_each_safe(stage_buf, stage_buf_tmp, &renderer->stage.buffers, link) {
if (stage_buf->allocs.size == 0) {
continue;
}
wl_list_remove(&stage_buf->link);
wl_list_insert(&stage_cb->stage_buffers, &stage_buf->link);
}
if (!vulkan_sync_render_pass_release(renderer, pass)) {
wlr_log(WLR_ERROR, "Failed to sync render buffer");
@ -630,11 +667,18 @@ error:
}
static void render_pass_mark_box_updated(struct wlr_vk_render_pass *pass,
const pixman_box32_t *box) {
const struct wlr_box *box) {
if (!pass->two_pass) {
return;
}
rect_union_add(&pass->updated_region, box);
pixman_box32_t pixman_box = {
.x1 = box->x,
.x2 = box->x + box->width,
.y1 = box->y,
.y2 = box->y + box->height,
};
rect_union_add(&pass->updated_region, pixman_box);
}
static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
@ -654,26 +698,29 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
options->color.a, // no conversion for alpha
};
struct wlr_box box;
wlr_render_rect_options_get_box(options, pass->render_buffer->wlr_buffer, &box);
pixman_region32_t clip;
if (options->clip) {
pixman_region32_init(&clip);
pixman_region32_intersect_rect(&clip, options->clip,
box.x, box.y, box.width, box.height);
} else {
pixman_region32_init_rect(&clip,
box.x, box.y, box.width, box.height);
}
get_clip_region(pass, options->clip, &clip);
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(&clip, &clip_rects_len);
if (clip_rects_len == 0) {
pixman_region32_fini(&clip);
return;
// Record regions possibly updated for use in second subpass
for (int i = 0; i < clip_rects_len; i++) {
struct wlr_box clip_box = {
.x = clip_rects[i].x1,
.y = clip_rects[i].y1,
.width = clip_rects[i].x2 - clip_rects[i].x1,
.height = clip_rects[i].y2 - clip_rects[i].y1,
};
struct wlr_box intersection;
if (!wlr_box_intersection(&intersection, &options->box, &clip_box)) {
continue;
}
render_pass_mark_box_updated(pass, &intersection);
}
struct wlr_box box;
wlr_render_rect_options_get_box(options, pass->render_buffer->wlr_buffer, &box);
switch (options->blend_mode) {
case WLR_RENDER_BLEND_MODE_PREMULTIPLIED:;
float proj[9], matrix[9];
@ -692,23 +739,6 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
break;
}
const VkDeviceSize instance_size = 4 * sizeof(float);
struct wlr_vk_buffer_span span = vulkan_get_stage_span(pass->renderer,
clip_rects_len * instance_size, 16);
if (!span.buffer) {
pass->failed = true;
break;
}
float *instance_data = (float *)((char *)span.buffer->cpu_mapping + span.offset);
for (int i = 0; i < clip_rects_len; i++) {
const pixman_box32_t *rect = &clip_rects[i];
render_pass_mark_box_updated(pass, rect);
instance_data[i * 4 + 0] = (float)(rect->x1 - box.x) / box.width;
instance_data[i * 4 + 1] = (float)(rect->y1 - box.y) / box.height;
instance_data[i * 4 + 2] = (float)(rect->x2 - rect->x1) / box.width;
instance_data[i * 4 + 3] = (float)(rect->y2 - rect->y1) / box.height;
}
struct wlr_vk_vert_pcr_data vert_pcr_data = {
.uv_off = { 0, 0 },
.uv_size = { 1, 1 },
@ -722,9 +752,12 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
VK_SHADER_STAGE_FRAGMENT_BIT, sizeof(vert_pcr_data), sizeof(float) * 4,
linear_color);
VkDeviceSize vb_offset = span.offset;
vkCmdBindVertexBuffers(cb, 0, 1, &span.buffer->buffer, &vb_offset);
vkCmdDraw(cb, 4, clip_rects_len, 0, 0);
for (int i = 0; i < clip_rects_len; i++) {
VkRect2D rect;
convert_pixman_box_to_vk_rect(&clip_rects[i], &rect);
vkCmdSetScissor(cb, 0, 1, &rect);
vkCmdDraw(cb, 4, 1, 0, 0);
}
break;
case WLR_RENDER_BLEND_MODE_NONE:;
VkClearAttachment clear_att = {
@ -741,9 +774,7 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
.layerCount = 1,
};
for (int i = 0; i < clip_rects_len; i++) {
const pixman_box32_t *rect = &clip_rects[i];
render_pass_mark_box_updated(pass, rect);
convert_pixman_box_to_vk_rect(rect, &clear_rect.rect);
convert_pixman_box_to_vk_rect(&clip_rects[i], &clear_rect.rect);
vkCmdClearAttachments(cb, 1, &clear_att, 1, &clear_rect);
}
break;
@ -785,31 +816,6 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
wlr_matrix_project_box(matrix, &dst_box, options->transform, proj);
wlr_matrix_multiply(matrix, pass->projection, matrix);
pixman_region32_t clip;
if (options->clip) {
pixman_region32_init(&clip);
pixman_region32_intersect_rect(&clip, options->clip,
dst_box.x, dst_box.y, dst_box.width, dst_box.height);
} else {
pixman_region32_init_rect(&clip,
dst_box.x, dst_box.y, dst_box.width, dst_box.height);
}
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(&clip, &clip_rects_len);
if (clip_rects_len == 0) {
pixman_region32_fini(&clip);
return;
}
const VkDeviceSize instance_size = 4 * sizeof(float);
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer,
clip_rects_len * instance_size, 16);
if (!span.buffer) {
pixman_region32_fini(&clip);
pass->failed = true;
return;
}
struct wlr_vk_vert_pcr_data vert_pcr_data = {
.uv_off = {
src_box.x / options->texture->width,
@ -880,7 +886,6 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
WLR_RENDER_BLEND_MODE_NONE : options->blend_mode,
});
if (!pipe) {
pixman_region32_fini(&clip);
pass->failed = true;
return;
}
@ -888,7 +893,6 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
struct wlr_vk_texture_view *view =
vulkan_texture_get_or_create_view(texture, pipe->layout, srgb_image_view);
if (!view) {
pixman_region32_fini(&clip);
pass->failed = true;
return;
}
@ -926,35 +930,34 @@ static void render_pass_add_texture(struct wlr_render_pass *wlr_pass,
VK_SHADER_STAGE_FRAGMENT_BIT, sizeof(vert_pcr_data),
sizeof(frag_pcr_data), &frag_pcr_data);
float *instance_data = (float *)((char *)span.buffer->cpu_mapping + span.offset);
pixman_region32_t clip;
get_clip_region(pass, options->clip, &clip);
int clip_rects_len;
const pixman_box32_t *clip_rects = pixman_region32_rectangles(&clip, &clip_rects_len);
for (int i = 0; i < clip_rects_len; i++) {
const pixman_box32_t *rect = &clip_rects[i];
render_pass_mark_box_updated(pass, rect);
VkRect2D rect;
convert_pixman_box_to_vk_rect(&clip_rects[i], &rect);
vkCmdSetScissor(cb, 0, 1, &rect);
vkCmdDraw(cb, 4, 1, 0, 0);
struct wlr_fbox norm = {
.x = (double)(rect->x1 - dst_box.x) / dst_box.width,
.y = (double)(rect->y1 - dst_box.y) / dst_box.height,
.width = (double)(rect->x2 - rect->x1) / dst_box.width,
.height = (double)(rect->y2 - rect->y1) / dst_box.height,
struct wlr_box clip_box = {
.x = clip_rects[i].x1,
.y = clip_rects[i].y1,
.width = clip_rects[i].x2 - clip_rects[i].x1,
.height = clip_rects[i].y2 - clip_rects[i].y1,
};
if (options->transform != WL_OUTPUT_TRANSFORM_NORMAL) {
wlr_fbox_transform(&norm, &norm, options->transform, 1.0, 1.0);
struct wlr_box intersection;
if (!wlr_box_intersection(&intersection, &dst_box, &clip_box)) {
continue;
}
instance_data[i * 4 + 0] = (float)norm.x;
instance_data[i * 4 + 1] = (float)norm.y;
instance_data[i * 4 + 2] = (float)norm.width;
instance_data[i * 4 + 3] = (float)norm.height;
render_pass_mark_box_updated(pass, &intersection);
}
pixman_region32_fini(&clip);
VkDeviceSize vb_offset = span.offset;
vkCmdBindVertexBuffers(cb, 0, 1, &span.buffer->buffer, &vb_offset);
vkCmdDraw(cb, 4, clip_rects_len, 0, 0);
texture->last_used_cb = pass->command_buffer;
pixman_region32_fini(&clip);
if (texture->dmabuf_imported || (options != NULL && options->wait_timeline != NULL)) {
struct wlr_vk_render_pass_texture *pass_texture =
wl_array_add(&pass->textures, sizeof(*pass_texture));
@ -1036,7 +1039,7 @@ static bool create_3d_lut_image(struct wlr_vk_renderer *renderer,
res = vkCreateImage(dev, &img_info, NULL, image);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateImage failed", res);
return false;
return NULL;
}
VkMemoryRequirements mem_reqs = {0};
@ -1093,13 +1096,13 @@ static bool create_3d_lut_image(struct wlr_vk_renderer *renderer,
size_t size = dim_len * dim_len * dim_len * bytes_per_block;
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer,
size, bytes_per_block);
if (!span.buffer || span.size != size) {
if (!span.buffer || span.alloc.size != size) {
wlr_log(WLR_ERROR, "Failed to retrieve staging buffer");
goto fail_imageview;
}
float sample_range = 1.0f / (dim_len - 1);
char *map = (char *)span.buffer->cpu_mapping + span.offset;
char *map = (char *)span.buffer->cpu_mapping + span.alloc.start;
float *dst = (float *)map;
for (size_t b_index = 0; b_index < dim_len; b_index++) {
for (size_t g_index = 0; g_index < dim_len; g_index++) {
@ -1129,7 +1132,7 @@ static bool create_3d_lut_image(struct wlr_vk_renderer *renderer,
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_ACCESS_TRANSFER_WRITE_BIT);
VkBufferImageCopy copy = {
.bufferOffset = span.offset,
.bufferOffset = span.alloc.start,
.imageExtent.width = dim_len,
.imageExtent.height = dim_len,
.imageExtent.depth = dim_len,
@ -1299,7 +1302,7 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
struct wlr_vk_command_buffer *cb = vulkan_acquire_command_buffer(renderer);
if (cb == NULL) {
render_pass_destroy(pass);
free(pass);
return NULL;
}
@ -1310,7 +1313,7 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
if (res != VK_SUCCESS) {
wlr_vk_error("vkBeginCommandBuffer", res);
vulkan_reset_command_buffer(cb);
render_pass_destroy(pass);
free(pass);
return NULL;
}
@ -1322,14 +1325,6 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT, VK_ACCESS_SHADER_READ_BIT);
}
struct wlr_vk_render_timer *timer = NULL;
if (options != NULL && options->timer != NULL) {
timer = wl_container_of(options->timer, timer, base);
vkCmdResetQueryPool(cb->vk, timer->query_pool, 0, 2);
vkCmdWriteTimestamp(cb->vk, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
timer->query_pool, 0);
}
int width = buffer->wlr_buffer->width;
int height = buffer->wlr_buffer->height;
VkRect2D rect = { .extent = { width, height } };
@ -1348,7 +1343,6 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
.height = height,
.maxDepth = 1,
});
vkCmdSetScissor(cb->vk, 0, 1, &rect);
// matrix_projection() assumes a GL coordinate system so we need
// to pass WL_OUTPUT_TRANSFORM_FLIPPED_180 to adjust it for vulkan.
@ -1359,6 +1353,5 @@ struct wlr_vk_render_pass *vulkan_begin_render_pass(struct wlr_vk_renderer *rend
pass->render_buffer_out = buffer_out;
pass->render_setup = render_setup;
pass->command_buffer = cb;
pass->timer = timer;
return pass;
}


@ -1,5 +1,6 @@
#include <assert.h>
#include <fcntl.h>
#include <math.h>
#include <poll.h>
#include <stdlib.h>
#include <stdint.h>
@ -7,7 +8,6 @@
#include <unistd.h>
#include <drm_fourcc.h>
#include <vulkan/vulkan.h>
#include <wayland-util.h>
#include <wlr/render/color.h>
#include <wlr/render/interface.h>
#include <wlr/types/wlr_drm.h>
@ -26,9 +26,11 @@
#include "render/vulkan/shaders/texture.frag.h"
#include "render/vulkan/shaders/quad.frag.h"
#include "render/vulkan/shaders/output.frag.h"
#include "util/array.h"
#include "types/wlr_buffer.h"
#include "util/time.h"
// TODO:
// - simplify stage allocation, don't track allocations but use ringbuffer-like
// - use a pipeline cache (not sure when to save though, after every pipeline
// creation?)
// - create pipelines as derivatives of each other
@ -44,7 +46,7 @@ static bool default_debug = true;
static const struct wlr_renderer_impl renderer_impl;
bool wlr_renderer_is_vk(const struct wlr_renderer *wlr_renderer) {
bool wlr_renderer_is_vk(struct wlr_renderer *wlr_renderer) {
return wlr_renderer->impl == &renderer_impl;
}
@ -185,13 +187,18 @@ static void destroy_render_format_setup(struct wlr_vk_renderer *renderer,
free(setup);
}
static void stage_buffer_destroy(struct wlr_vk_renderer *r,
struct wlr_vk_stage_buffer *buffer) {
static void shared_buffer_destroy(struct wlr_vk_renderer *r,
struct wlr_vk_shared_buffer *buffer) {
if (!buffer) {
return;
}
wl_array_release(&buffer->watermarks);
if (buffer->allocs.size > 0) {
wlr_log(WLR_ERROR, "shared_buffer_finish: %zu allocations left",
buffer->allocs.size / sizeof(struct wlr_vk_allocation));
}
wl_array_release(&buffer->allocs);
if (buffer->cpu_mapping) {
vkUnmapMemory(r->dev->dev, buffer->memory);
buffer->cpu_mapping = NULL;
@ -207,12 +214,75 @@ static void stage_buffer_destroy(struct wlr_vk_renderer *r,
free(buffer);
}
static struct wlr_vk_stage_buffer *stage_buffer_create(
struct wlr_vk_renderer *r, VkDeviceSize bsize) {
struct wlr_vk_stage_buffer *buf = calloc(1, sizeof(*buf));
struct wlr_vk_buffer_span vulkan_get_stage_span(struct wlr_vk_renderer *r,
VkDeviceSize size, VkDeviceSize alignment) {
// try to find free span
// simple greedy allocation algorithm - should be enough for this usecase
// since all allocations are freed together after the frame
struct wlr_vk_shared_buffer *buf;
wl_list_for_each_reverse(buf, &r->stage.buffers, link) {
VkDeviceSize start = 0u;
if (buf->allocs.size > 0) {
const struct wlr_vk_allocation *allocs = buf->allocs.data;
size_t allocs_len = buf->allocs.size / sizeof(struct wlr_vk_allocation);
const struct wlr_vk_allocation *last = &allocs[allocs_len - 1];
start = last->start + last->size;
}
assert(start <= buf->buf_size);
// ensure the proposed start is a multiple of alignment
start += alignment - 1 - ((start + alignment - 1) % alignment);
if (buf->buf_size - start < size) {
continue;
}
struct wlr_vk_allocation *a = wl_array_add(&buf->allocs, sizeof(*a));
if (a == NULL) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
goto error_alloc;
}
*a = (struct wlr_vk_allocation){
.start = start,
.size = size,
};
return (struct wlr_vk_buffer_span) {
.buffer = buf,
.alloc = *a,
};
}
if (size > max_stage_size) {
wlr_log(WLR_ERROR, "cannot vulkan stage buffer: "
"requested size (%zu bytes) exceeds maximum (%zu bytes)",
(size_t)size, (size_t)max_stage_size);
goto error_alloc;
}
// we didn't find a free buffer - create one
// size = clamp(max(size * 2, prev_size * 2), min_size, max_size)
VkDeviceSize bsize = size * 2;
bsize = bsize < min_stage_size ? min_stage_size : bsize;
if (!wl_list_empty(&r->stage.buffers)) {
struct wl_list *last_link = r->stage.buffers.prev;
struct wlr_vk_shared_buffer *prev = wl_container_of(
last_link, prev, link);
VkDeviceSize last_size = 2 * prev->buf_size;
bsize = bsize < last_size ? last_size : bsize;
}
if (bsize > max_stage_size) {
wlr_log(WLR_INFO, "vulkan stage buffers have reached max size");
bsize = max_stage_size;
}
// create buffer
buf = calloc(1, sizeof(*buf));
if (!buf) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
return NULL;
goto error_alloc;
}
wl_list_init(&buf->link);
@ -222,8 +292,7 @@ static struct wlr_vk_stage_buffer *stage_buffer_create(
.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
.size = bsize,
.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT |
VK_BUFFER_USAGE_TRANSFER_SRC_BIT |
VK_BUFFER_USAGE_VERTEX_BUFFER_BIT,
VK_BUFFER_USAGE_TRANSFER_SRC_BIT,
.sharingMode = VK_SHARING_MODE_EXCLUSIVE,
};
res = vkCreateBuffer(r->dev->dev, &buf_info, NULL, &buf->buffer);
@ -250,7 +319,7 @@ static struct wlr_vk_stage_buffer *stage_buffer_create(
};
res = vkAllocateMemory(r->dev->dev, &mem_info, NULL, &buf->memory);
if (res != VK_SUCCESS) {
wlr_vk_error("vkAllocateMemory", res);
wlr_vk_error("vkAllocatorMemory", res);
goto error;
}
@ -266,162 +335,34 @@ static struct wlr_vk_stage_buffer *stage_buffer_create(
goto error;
}
struct wlr_vk_allocation *a = wl_array_add(&buf->allocs, sizeof(*a));
if (a == NULL) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
goto error;
}
buf->buf_size = bsize;
return buf;
wl_list_insert(&r->stage.buffers, &buf->link);
error:
stage_buffer_destroy(r, buf);
return NULL;
}
void vulkan_stage_buffer_reclaim(struct wlr_vk_stage_buffer *buf,
uint64_t current_point) {
size_t completed = 0;
struct wlr_vk_stage_watermark *mark;
wl_array_for_each(mark, &buf->watermarks) {
if (mark->timeline_point > current_point) {
break;
}
buf->tail = mark->head;
completed++;
}
if (completed > 0) {
completed *= sizeof(struct wlr_vk_stage_watermark);
if (completed == buf->watermarks.size) {
buf->watermarks.size = 0;
} else {
array_remove_at(&buf->watermarks, 0, completed);
}
}
}
VkDeviceSize vulkan_stage_buffer_alloc(struct wlr_vk_stage_buffer *buf,
VkDeviceSize size, VkDeviceSize alignment) {
VkDeviceSize head = buf->head;
// Round up to the next multiple of alignment
VkDeviceSize rem = head % alignment;
if (rem != 0) {
head += alignment - rem;
}
VkDeviceSize end = head >= buf->tail ? buf->buf_size : buf->tail;
if (head + size < end) {
// Regular allocation head till end of available space
buf->head = head + size;
return head;
} else if (size < buf->tail && head >= buf->tail) {
// First allocation after wrap-around
buf->head = size;
return 0;
}
return (VkDeviceSize)-1;
}
struct wlr_vk_buffer_span vulkan_get_stage_span(struct wlr_vk_renderer *r,
VkDeviceSize size, VkDeviceSize alignment) {
if (size >= max_stage_size) {
wlr_log(WLR_ERROR, "cannot allocate stage buffer: "
"requested size (%zu bytes) exceeds maximum (%zu bytes)",
(size_t)size, (size_t)max_stage_size-1);
goto error;
}
VkDeviceSize max_buf_size = min_stage_size / 2;
struct wlr_vk_stage_buffer *buf;
wl_list_for_each(buf, &r->stage.buffers, link) {
VkDeviceSize offset = vulkan_stage_buffer_alloc(buf, size, alignment);
if (offset != (VkDeviceSize)-1) {
return (struct wlr_vk_buffer_span) {
.buffer = buf,
.offset = offset,
.size = size,
};
}
if (buf->buf_size > max_buf_size) {
max_buf_size = buf->buf_size;
}
}
VkDeviceSize bsize = max_buf_size * 2;
while (size * 2 > bsize) {
bsize *= 2;
}
if (bsize > max_stage_size) {
wlr_log(WLR_INFO, "vulkan stage buffer has reached max size");
bsize = max_stage_size;
}
struct wlr_vk_stage_buffer *new_buf = stage_buffer_create(r, bsize);
if (new_buf == NULL) {
goto error;
}
wl_list_insert(r->stage.buffers.prev, &new_buf->link);
VkDeviceSize offset = vulkan_stage_buffer_alloc(new_buf, size, alignment);
assert(offset != (VkDeviceSize)-1);
return (struct wlr_vk_buffer_span) {
.buffer = new_buf,
.offset = offset,
*a = (struct wlr_vk_allocation){
.start = 0,
.size = size,
};
return (struct wlr_vk_buffer_span) {
.buffer = buf,
.alloc = *a,
};
error:
shared_buffer_destroy(r, buf);
error_alloc:
return (struct wlr_vk_buffer_span) {
.buffer = NULL,
.offset = 0,
.size = 0,
.alloc = (struct wlr_vk_allocation) {0, 0},
};
}
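For reference, the alignment step in the greedy allocator above, start += alignment - 1 - ((start + alignment - 1) % alignment), is the usual round-up-to-a-multiple idiom. A minimal standalone check, for illustration only (not part of this change):

#include <assert.h>
#include <stdint.h>

static uint64_t align_up(uint64_t start, uint64_t alignment) {
	// Same arithmetic as the allocation path in vulkan_get_stage_span() above.
	return start + alignment - 1 - ((start + alignment - 1) % alignment);
}

int main(void) {
	assert(align_up(0, 16) == 0);
	assert(align_up(1, 16) == 16);
	assert(align_up(16, 16) == 16);
	assert(align_up(17, 8) == 24);
	return 0;
}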
void vulkan_stage_mark_submit(struct wlr_vk_renderer *renderer,
uint64_t timeline_point) {
struct wlr_vk_stage_buffer *buf;
wl_list_for_each(buf, &renderer->stage.buffers, link) {
if (buf->head == buf->tail) {
continue;
}
struct wlr_vk_stage_watermark *mark = wl_array_add(
&buf->watermarks, sizeof(*mark));
if (mark == NULL) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
continue;
}
*mark = (struct wlr_vk_stage_watermark){
.head = buf->head,
.timeline_point = timeline_point,
};
}
}
static void stage_buffer_gc(struct wlr_vk_renderer *renderer, uint64_t current_point) {
struct wlr_vk_stage_buffer *buf, *buf_tmp;
wl_list_for_each_safe(buf, buf_tmp, &renderer->stage.buffers, link) {
if (buf->head != buf->tail) {
buf->empty_gc_cnt = 0;
vulkan_stage_buffer_reclaim(buf, current_point);
continue;
}
if (buf->buf_size <= min_stage_size) {
// We will not deallocate the first buffer
continue;
}
buf->empty_gc_cnt++;
if (buf->empty_gc_cnt >= 1000) {
// This buffer hasn't been used for a while, so let's deallocate it
stage_buffer_destroy(renderer, buf);
}
}
}
VkCommandBuffer vulkan_record_stage_cb(struct wlr_vk_renderer *renderer) {
if (renderer->stage.cb == NULL) {
renderer->stage.cb = vulkan_acquire_command_buffer(renderer);
@ -438,52 +379,7 @@ VkCommandBuffer vulkan_record_stage_cb(struct wlr_vk_renderer *renderer) {
return renderer->stage.cb->vk;
}
VkSemaphore vulkan_command_buffer_wait_sync_file(struct wlr_vk_renderer *renderer,
struct wlr_vk_command_buffer *render_cb, size_t sem_index, int sync_file_fd) {
VkResult res;
VkSemaphore *wait_semaphores = render_cb->wait_semaphores.data;
size_t wait_semaphores_len = render_cb->wait_semaphores.size / sizeof(wait_semaphores[0]);
VkSemaphore *sem_ptr;
if (sem_index >= wait_semaphores_len) {
sem_ptr = wl_array_add(&render_cb->wait_semaphores, sizeof(*sem_ptr));
if (sem_ptr == NULL) {
return VK_NULL_HANDLE;
}
*sem_ptr = VK_NULL_HANDLE;
} else {
sem_ptr = &wait_semaphores[sem_index];
}
if (*sem_ptr == VK_NULL_HANDLE) {
VkSemaphoreCreateInfo semaphore_info = {
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
};
res = vkCreateSemaphore(renderer->dev->dev, &semaphore_info, NULL, sem_ptr);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateSemaphore", res);
return VK_NULL_HANDLE;
}
}
VkImportSemaphoreFdInfoKHR import_info = {
.sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
.flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
.semaphore = *sem_ptr,
.fd = sync_file_fd,
};
res = renderer->dev->api.vkImportSemaphoreFdKHR(renderer->dev->dev, &import_info);
if (res != VK_SUCCESS) {
wlr_vk_error("vkImportSemaphoreFdKHR", res);
return VK_NULL_HANDLE;
}
return *sem_ptr;
}
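The helper above pairs with wlr_drm_syncobj_timeline_export_sync_file() (see the vulkan_read_pixels hunk below): export a timeline point as a sync_file, import it into a semaphore, and make the next submit wait on it. A condensed sketch of that pattern, assuming the internal declarations from render/vulkan.h:

#include <unistd.h>
#include <vulkan/vulkan.h>
#include <wlr/render/drm_syncobj.h>
#include "render/vulkan.h"

// Illustration only: make a queue submission wait on a drm syncobj
// timeline point. On successful import the semaphore takes ownership
// of the fd; on failure it is still ours to close.
static VkSemaphore wait_for_timeline_point(struct wlr_vk_renderer *renderer,
		struct wlr_vk_command_buffer *cb,
		struct wlr_drm_syncobj_timeline *timeline, uint64_t point) {
	int fd = wlr_drm_syncobj_timeline_export_sync_file(timeline, point);
	if (fd < 0) {
		return VK_NULL_HANDLE;
	}
	VkSemaphore sem = vulkan_command_buffer_wait_sync_file(renderer, cb, 0, fd);
	if (sem == VK_NULL_HANDLE) {
		close(fd);
	}
	return sem;
}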
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer, int wait_sync_file_fd) {
bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer) {
if (renderer->stage.cb == NULL) {
return false;
}
@ -496,8 +392,6 @@ bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer, int wait_sync_fi
return false;
}
VkSemaphore wait_semaphore;
VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT;
VkTimelineSemaphoreSubmitInfoKHR timeline_submit_info = {
.sType = VK_STRUCTURE_TYPE_TIMELINE_SEMAPHORE_SUBMIT_INFO_KHR,
.signalSemaphoreValueCount = 1,
@ -511,32 +405,16 @@ bool vulkan_submit_stage_wait(struct wlr_vk_renderer *renderer, int wait_sync_fi
.signalSemaphoreCount = 1,
.pSignalSemaphores = &renderer->timeline_semaphore,
};
if (wait_sync_file_fd != -1) {
wait_semaphore = vulkan_command_buffer_wait_sync_file(renderer, cb, 0, wait_sync_file_fd);
if (wait_semaphore == VK_NULL_HANDLE) {
return false;
}
submit_info.waitSemaphoreCount = 1;
submit_info.pWaitSemaphores = &wait_semaphore;
submit_info.pWaitDstStageMask = &wait_stage;
}
vulkan_stage_mark_submit(renderer, timeline_point);
VkResult res = vkQueueSubmit(renderer->dev->queue, 1, &submit_info, VK_NULL_HANDLE);
if (res != VK_SUCCESS) {
wlr_vk_error("vkQueueSubmit", res);
return false;
}
if (!vulkan_wait_command_buffer(cb, renderer)) {
return false;
}
// NOTE: don't release stage allocations here since they may still be
// used for reading. Will be done next frame.
// We did a blocking wait so this is now the current point
stage_buffer_gc(renderer, timeline_point);
return true;
return vulkan_wait_command_buffer(cb, renderer);
}
struct wlr_vk_format_props *vulkan_format_props_from_drm(
@ -570,6 +448,7 @@ static bool init_command_buffer(struct wlr_vk_command_buffer *cb,
.vk = vk_cb,
};
wl_list_init(&cb->destroy_textures);
wl_list_init(&cb->stage_buffers);
return true;
}
@ -595,7 +474,7 @@ bool vulkan_wait_command_buffer(struct wlr_vk_command_buffer *cb,
}
static void release_command_buffer_resources(struct wlr_vk_command_buffer *cb,
struct wlr_vk_renderer *renderer) {
struct wlr_vk_renderer *renderer, int64_t now) {
struct wlr_vk_texture *texture, *texture_tmp;
wl_list_for_each_safe(texture, texture_tmp, &cb->destroy_textures, destroy_link) {
wl_list_remove(&texture->destroy_link);
@ -603,6 +482,15 @@ static void release_command_buffer_resources(struct wlr_vk_command_buffer *cb,
wlr_texture_destroy(&texture->wlr_texture);
}
struct wlr_vk_shared_buffer *buf, *buf_tmp;
wl_list_for_each_safe(buf, buf_tmp, &cb->stage_buffers, link) {
buf->allocs.size = 0;
buf->last_used_ms = now;
wl_list_remove(&buf->link);
wl_list_insert(&renderer->stage.buffers, &buf->link);
}
if (cb->color_transform) {
wlr_color_transform_unref(cb->color_transform);
cb->color_transform = NULL;
@ -621,14 +509,22 @@ static struct wlr_vk_command_buffer *get_command_buffer(
return NULL;
}
stage_buffer_gc(renderer, current_point);
// Garbage collect any buffers that have remained unused for too long
int64_t now = get_current_time_msec();
struct wlr_vk_shared_buffer *buf, *buf_tmp;
wl_list_for_each_safe(buf, buf_tmp, &renderer->stage.buffers, link) {
if (buf->allocs.size == 0 && buf->last_used_ms + 10000 < now) {
shared_buffer_destroy(renderer, buf);
}
}
// Destroy textures for completed command buffers
for (size_t i = 0; i < VULKAN_COMMAND_BUFFERS_CAP; i++) {
struct wlr_vk_command_buffer *cb = &renderer->command_buffers[i];
if (cb->vk != VK_NULL_HANDLE && !cb->recording &&
cb->timeline_point <= current_point) {
release_command_buffer_resources(cb, renderer);
release_command_buffer_resources(cb, renderer, now);
}
}
@ -1231,7 +1127,7 @@ static void vulkan_destroy(struct wlr_renderer *wlr_renderer) {
if (cb->vk == VK_NULL_HANDLE) {
continue;
}
release_command_buffer_resources(cb, renderer);
release_command_buffer_resources(cb, renderer, 0);
if (cb->binary_semaphore != VK_NULL_HANDLE) {
vkDestroySemaphore(renderer->dev->dev, cb->binary_semaphore, NULL);
}
@ -1243,9 +1139,9 @@ static void vulkan_destroy(struct wlr_renderer *wlr_renderer) {
}
// stage.cb automatically freed with command pool
struct wlr_vk_stage_buffer *buf, *tmp_buf;
struct wlr_vk_shared_buffer *buf, *tmp_buf;
wl_list_for_each_safe(buf, tmp_buf, &renderer->stage.buffers, link) {
stage_buffer_destroy(renderer, buf);
shared_buffer_destroy(renderer, buf);
}
struct wlr_vk_texture *tex, *tex_tmp;
@ -1292,7 +1188,7 @@ static void vulkan_destroy(struct wlr_renderer *wlr_renderer) {
vkDestroyPipelineLayout(dev->dev, pipeline_layout->vk, NULL);
vkDestroyDescriptorSetLayout(dev->dev, pipeline_layout->ds, NULL);
vkDestroySampler(dev->dev, pipeline_layout->sampler, NULL);
renderer->dev->api.vkDestroySamplerYcbcrConversionKHR(dev->dev, pipeline_layout->ycbcr.conversion, NULL);
vkDestroySamplerYcbcrConversion(dev->dev, pipeline_layout->ycbcr.conversion, NULL);
free(pipeline_layout);
}
@ -1322,8 +1218,7 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
VkFormat src_format, VkImage src_image,
uint32_t drm_format, uint32_t stride,
uint32_t width, uint32_t height, uint32_t src_x, uint32_t src_y,
uint32_t dst_x, uint32_t dst_y, void *data,
struct wlr_drm_syncobj_timeline *wait_timeline, uint64_t wait_point) {
uint32_t dst_x, uint32_t dst_y, void *data) {
VkDevice dev = vk_renderer->dev->dev;
const struct wlr_pixel_format_info *pixel_format_info = drm_get_pixel_format_info(drm_format);
@ -1509,17 +1404,7 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
VK_PIPELINE_STAGE_TRANSFER_BIT,
VK_ACCESS_MEMORY_READ_BIT);
int wait_sync_file_fd = -1;
if (wait_timeline != NULL) {
wait_sync_file_fd = wlr_drm_syncobj_timeline_export_sync_file(wait_timeline, wait_point);
if (wait_sync_file_fd < 0) {
wlr_log(WLR_ERROR, "Failed to export wait timeline point as sync_file");
return false;
}
}
if (!vulkan_submit_stage_wait(vk_renderer, wait_sync_file_fd)) {
close(wait_sync_file_fd);
if (!vulkan_submit_stage_wait(vk_renderer)) {
return false;
}
@ -1600,80 +1485,6 @@ static struct wlr_render_pass *vulkan_begin_buffer_pass(struct wlr_renderer *wlr
return &render_pass->base;
}
static const struct wlr_render_timer_impl render_timer_impl;
static struct wlr_render_timer *vulkan_render_timer_create(
struct wlr_renderer *wlr_renderer) {
struct wlr_vk_renderer *renderer = vulkan_get_renderer(wlr_renderer);
if (renderer->dev->timestamp_valid_bits == 0) {
wlr_log(WLR_ERROR, "Failed to create render timer: "
"timestamp queries not supported by queue family");
return NULL;
}
struct wlr_vk_render_timer *timer = calloc(1, sizeof(*timer));
if (!timer) {
wlr_log_errno(WLR_ERROR, "Allocation failed");
return NULL;
}
VkQueryPoolCreateInfo pool_info = {
.sType = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO,
.queryType = VK_QUERY_TYPE_TIMESTAMP,
.queryCount = 2,
};
VkResult res = vkCreateQueryPool(renderer->dev->dev, &pool_info,
NULL, &timer->query_pool);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateQueryPool", res);
free(timer);
return NULL;
}
timer->base.impl = &render_timer_impl;
timer->renderer = renderer;
return &timer->base;
}
static int vulkan_render_timer_get_duration_ns(
struct wlr_render_timer *wlr_timer) {
struct wlr_vk_render_timer *timer =
wl_container_of(wlr_timer, timer, base);
struct wlr_vk_renderer *renderer = timer->renderer;
// Layout: [ timestamp1, avail1, timestamp2, avail2 ]
uint64_t data[4] = {0};
VkResult res = vkGetQueryPoolResults(renderer->dev->dev,
timer->query_pool, 0, 2, sizeof(data), data,
2 * sizeof(uint64_t),
VK_QUERY_RESULT_64_BIT | VK_QUERY_RESULT_WITH_AVAILABILITY_BIT);
if (res == VK_NOT_READY || data[1] == 0 || data[3] == 0) {
wlr_log(WLR_ERROR, "Failed to get render duration: "
"timestamp query results not yet ready");
return -1;
}
if (res != VK_SUCCESS) {
wlr_vk_error("vkGetQueryPoolResults", res);
return -1;
}
uint64_t ticks = data[2] - data[0];
return (int)(ticks * renderer->dev->timestamp_period);
}
static void vulkan_render_timer_destroy(
struct wlr_render_timer *wlr_timer) {
struct wlr_vk_render_timer *timer =
wl_container_of(wlr_timer, timer, base);
vkDestroyQueryPool(timer->renderer->dev->dev, timer->query_pool, NULL);
free(timer);
}
static const struct wlr_render_timer_impl render_timer_impl = {
.get_duration_ns = vulkan_render_timer_get_duration_ns,
.destroy = vulkan_render_timer_destroy,
};
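The duration calculation above multiplies elapsed GPU ticks by VkPhysicalDeviceLimits::timestampPeriod (nanoseconds per tick). A worked example with made-up numbers, for illustration only:

#include <stdint.h>
#include <stdio.h>

int main(void) {
	// Hypothetical values: timestampPeriod comes from the physical device
	// limits, t_begin/t_end from the two timestamp queries.
	float timestamp_period = 52.08f; // ns per tick (example value)
	uint64_t t_begin = 1000000;
	uint64_t t_end = 1192000;
	uint64_t ticks = t_end - t_begin;
	printf("render took %d ns\n", (int)(ticks * timestamp_period)); // ~10 ms
	return 0;
}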
static const struct wlr_renderer_impl renderer_impl = {
.get_texture_formats = vulkan_get_texture_formats,
.get_render_formats = vulkan_get_render_formats,
@ -1681,7 +1492,6 @@ static const struct wlr_renderer_impl renderer_impl = {
.get_drm_fd = vulkan_get_drm_fd,
.texture_from_buffer = vulkan_texture_from_buffer,
.begin_buffer_pass = vulkan_begin_buffer_pass,
.render_timer_create = vulkan_render_timer_create,
};
// Initializes the VkDescriptorSetLayout and VkPipelineLayout needed
@ -1882,25 +1692,6 @@ static bool pipeline_key_equals(const struct wlr_vk_pipeline_key *a,
return true;
}
static const VkVertexInputBindingDescription instance_vert_binding = {
.binding = 0,
.stride = sizeof(float) * 4,
.inputRate = VK_VERTEX_INPUT_RATE_INSTANCE,
};
static const VkVertexInputAttributeDescription instance_vert_attr = {
.location = 0,
.binding = 0,
.format = VK_FORMAT_R32G32B32A32_SFLOAT,
.offset = 0,
};
static const VkPipelineVertexInputStateCreateInfo instance_vert_input = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
.vertexBindingDescriptionCount = 1,
.pVertexBindingDescriptions = &instance_vert_binding,
.vertexAttributeDescriptionCount = 1,
.pVertexAttributeDescriptions = &instance_vert_attr,
};
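Where this binding is in use, the vec4 advances once per instance (VK_VERTEX_INPUT_RATE_INSTANCE) rather than per vertex, which is what lets the render-pass hunk near the top of this section replace a per-rect scissor/draw loop with a single draw call. A condensed sketch of the draw side (illustrative helper, not wlroots API):

#include <vulkan/vulkan.h>

// One vec4 (x, y, w, h) per clip rectangle drives one quad per instance.
static void draw_clip_quads(VkCommandBuffer cb, VkBuffer instance_buf,
		VkDeviceSize offset, uint32_t rect_count) {
	vkCmdBindVertexBuffers(cb, 0, 1, &instance_buf, &offset);
	vkCmdDraw(cb, 4, rect_count, 0, 0); // 4 vertices per quad, one instance per rect
}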
// Initializes the pipeline for rendering textures and using the given
// VkRenderPass and VkPipelineLayout.
struct wlr_vk_pipeline *setup_get_or_create_pipeline(
@ -2032,6 +1823,10 @@ struct wlr_vk_pipeline *setup_get_or_create_pipeline(
.dynamicStateCount = sizeof(dyn_states) / sizeof(dyn_states[0]),
};
VkPipelineVertexInputStateCreateInfo vertex = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
};
VkGraphicsPipelineCreateInfo pinfo = {
.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
.layout = pipeline_layout->vk,
@ -2046,7 +1841,7 @@ struct wlr_vk_pipeline *setup_get_or_create_pipeline(
.pMultisampleState = &multisample,
.pViewportState = &viewport,
.pDynamicState = &dynamic,
.pVertexInputState = &instance_vert_input,
.pVertexInputState = &vertex,
};
VkPipelineCache cache = VK_NULL_HANDLE;
@ -2145,6 +1940,10 @@ static bool init_blend_to_output_pipeline(struct wlr_vk_renderer *renderer,
.dynamicStateCount = sizeof(dyn_states) / sizeof(dyn_states[0]),
};
VkPipelineVertexInputStateCreateInfo vertex = {
.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,
};
VkGraphicsPipelineCreateInfo pinfo = {
.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
.pNext = NULL,
@ -2159,7 +1958,7 @@ static bool init_blend_to_output_pipeline(struct wlr_vk_renderer *renderer,
.pMultisampleState = &multisample,
.pViewportState = &viewport,
.pDynamicState = &dynamic,
.pVertexInputState = &instance_vert_input,
.pVertexInputState = &vertex,
};
VkPipelineCache cache = VK_NULL_HANDLE;
@ -2250,10 +2049,10 @@ struct wlr_vk_pipeline_layout *get_or_create_pipeline_layout(
.yChromaOffset = VK_CHROMA_LOCATION_MIDPOINT,
.chromaFilter = VK_FILTER_LINEAR,
};
res = renderer->dev->api.vkCreateSamplerYcbcrConversionKHR(renderer->dev->dev,
res = vkCreateSamplerYcbcrConversion(renderer->dev->dev,
&conversion_create_info, NULL, &pipeline_layout->ycbcr.conversion);
if (res != VK_SUCCESS) {
wlr_vk_error("vkCreateSamplerYcbcrConversionKHR", res);
wlr_vk_error("vkCreateSamplerYcbcrConversion", res);
free(pipeline_layout);
return NULL;
}


@ -8,14 +8,11 @@ layout(push_constant, row_major) uniform UBO {
vec2 uv_size;
} data;
layout(location = 0) in vec4 inst_rect;
layout(location = 0) out vec2 uv;
void main() {
vec2 pos = vec2(float((gl_VertexIndex + 1) & 2) * 0.5f,
float(gl_VertexIndex & 2) * 0.5f);
pos = inst_rect.xy + pos * inst_rect.zw;
uv = data.uv_offset + pos * data.uv_size;
gl_Position = data.proj * vec4(pos, 0.0, 1.0);
}
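The bit tricks above expand gl_VertexIndex 0..3 into the corners of the unit square. A quick plain-C check of the mapping, for illustration:

#include <stdio.h>

int main(void) {
	// Mirrors the shader expression for gl_VertexIndex = 0..3;
	// prints (0,0), (1,0), (1,1), (0,1).
	for (int i = 0; i < 4; i++) {
		float x = (float)((i + 1) & 2) * 0.5f;
		float y = (float)(i & 2) * 0.5f;
		printf("vertex %d -> (%.0f, %.0f)\n", i, x, y);
	}
	return 0;
}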


@ -15,7 +15,7 @@
static const struct wlr_texture_impl texture_impl;
bool wlr_texture_is_vk(const struct wlr_texture *wlr_texture) {
bool wlr_texture_is_vk(struct wlr_texture *wlr_texture) {
return wlr_texture->impl == &texture_impl;
}
@ -72,16 +72,16 @@ static bool write_pixels(struct wlr_vk_texture *texture,
// get staging buffer
struct wlr_vk_buffer_span span = vulkan_get_stage_span(renderer, bsize, format_info->bytes_per_block);
if (!span.buffer || span.size != bsize) {
if (!span.buffer || span.alloc.size != bsize) {
wlr_log(WLR_ERROR, "Failed to retrieve staging buffer");
free(copies);
return false;
}
char *map = (char*)span.buffer->cpu_mapping + span.offset;
char *map = (char*)span.buffer->cpu_mapping + span.alloc.start;
// upload data
uint32_t buf_off = span.offset;
uint32_t buf_off = span.alloc.start;
for (int i = 0; i < rects_len; i++) {
pixman_box32_t rect = rects[i];
uint32_t width = rect.x2 - rect.x1;
@ -238,8 +238,7 @@ static bool vulkan_texture_read_pixels(struct wlr_texture *wlr_texture,
void *p = wlr_texture_read_pixel_options_get_data(options);
return vulkan_read_pixels(texture->renderer, texture->format->vk, texture->image,
options->format, options->stride, src.width, src.height, src.x, src.y, 0, 0, p,
options->wait_timeline, options->wait_point);
options->format, options->stride, src.width, src.height, src.x, src.y, 0, 0, p);
}
static uint32_t vulkan_texture_preferred_read_format(struct wlr_texture *wlr_texture) {
@ -330,7 +329,6 @@ struct wlr_vk_texture_view *vulkan_texture_get_or_create_view(struct wlr_vk_text
view->ds_pool = vulkan_alloc_texture_ds(texture->renderer, pipeline_layout->ds, &view->ds);
if (!view->ds_pool) {
vkDestroyImageView(dev, view->image_view, NULL);
free(view);
wlr_log(WLR_ERROR, "failed to allocate descriptor");
return NULL;
@ -655,7 +653,7 @@ VkImage vulkan_import_dmabuf(struct wlr_vk_renderer *renderer,
.sType = VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2,
};
renderer->dev->api.vkGetImageMemoryRequirements2KHR(dev, &memri, &memr);
vkGetImageMemoryRequirements2(dev, &memri, &memr);
int mem = vulkan_find_mem_type(renderer->dev, 0,
memr.memoryRequirements.memoryTypeBits & fdp.memoryTypeBits);
if (mem < 0) {
@ -714,7 +712,7 @@ VkImage vulkan_import_dmabuf(struct wlr_vk_renderer *renderer,
}
}
res = renderer->dev->api.vkBindImageMemory2KHR(dev, mem_count, bindi);
res = vkBindImageMemory2(dev, mem_count, bindi);
if (res != VK_SUCCESS) {
wlr_vk_error("vkBindMemory failed", res);
goto error_image;


@ -81,6 +81,21 @@ static VKAPI_ATTR VkBool32 debug_callback(VkDebugUtilsMessageSeverityFlagBitsEXT
}
struct wlr_vk_instance *vulkan_instance_create(bool debug) {
// we require vulkan 1.1
PFN_vkEnumerateInstanceVersion pfEnumInstanceVersion =
(PFN_vkEnumerateInstanceVersion)
vkGetInstanceProcAddr(VK_NULL_HANDLE, "vkEnumerateInstanceVersion");
if (!pfEnumInstanceVersion) {
wlr_log(WLR_ERROR, "wlroots requires vulkan 1.1 which is not available");
return NULL;
}
uint32_t ini_version;
if (pfEnumInstanceVersion(&ini_version) != VK_SUCCESS ||
ini_version < VK_API_VERSION_1_1) {
wlr_log(WLR_ERROR, "wlroots requires vulkan 1.1 which is not available");
return NULL;
}
uint32_t avail_extc = 0;
VkResult res;
@ -110,18 +125,7 @@ struct wlr_vk_instance *vulkan_instance_create(bool debug) {
}
size_t extensions_len = 0;
const char *extensions[8] = {0};
extensions[extensions_len++] = VK_KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2_EXTENSION_NAME;
extensions[extensions_len++] = VK_KHR_EXTERNAL_SEMAPHORE_CAPABILITIES_EXTENSION_NAME;
extensions[extensions_len++] = VK_KHR_EXTERNAL_MEMORY_CAPABILITIES_EXTENSION_NAME;
for (size_t i = 0; i < extensions_len; i++) {
if (!check_extension(avail_ext_props, avail_extc, extensions[i])) {
wlr_log(WLR_ERROR, "vulkan: required instance extension %s not found",
extensions[i]);
goto error;
}
}
const char *extensions[1] = {0};
bool debug_utils_found = false;
if (debug && check_extension(avail_ext_props, avail_extc,
@ -136,7 +140,7 @@ struct wlr_vk_instance *vulkan_instance_create(bool debug) {
.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
.pEngineName = "wlroots",
.engineVersion = WLR_VERSION_NUM,
.apiVersion = VK_API_VERSION_1_0,
.apiVersion = VK_API_VERSION_1_1,
};
VkInstanceCreateInfo instance_info = {
@ -278,10 +282,20 @@ VkPhysicalDevice vulkan_find_drm_phdev(struct wlr_vk_instance *ini, int drm_fd)
for (uint32_t i = 0; i < num_phdevs; ++i) {
VkPhysicalDevice phdev = phdevs[i];
// check whether device supports vulkan 1.1, needed for
// vkGetPhysicalDeviceProperties2
VkPhysicalDeviceProperties phdev_props;
vkGetPhysicalDeviceProperties(phdev, &phdev_props);
log_phdev(&phdev_props);
if (phdev_props.apiVersion < VK_API_VERSION_1_1) {
// NOTE: we could additionally check whether the
// VkPhysicalDeviceProperties2KHR extension is supported but
// implementations not supporting 1.1 are unlikely in future
continue;
}
// check for extensions
uint32_t avail_extc = 0;
res = vkEnumerateDeviceExtensionProperties(phdev, NULL,
@ -460,12 +474,6 @@ struct wlr_vk_device *vulkan_device_create(struct wlr_vk_instance *ini,
extensions[extensions_len++] = VK_EXT_IMAGE_DRM_FORMAT_MODIFIER_EXTENSION_NAME;
extensions[extensions_len++] = VK_KHR_TIMELINE_SEMAPHORE_EXTENSION_NAME; // or vulkan 1.2
extensions[extensions_len++] = VK_KHR_SYNCHRONIZATION_2_EXTENSION_NAME; // or vulkan 1.3
extensions[extensions_len++] = VK_KHR_EXTERNAL_MEMORY_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_BIND_MEMORY_2_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_SAMPLER_YCBCR_CONVERSION_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_EXTERNAL_SEMAPHORE_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_MAINTENANCE_1_EXTENSION_NAME; // or vulkan 1.1
extensions[extensions_len++] = VK_KHR_GET_MEMORY_REQUIREMENTS_2_EXTENSION_NAME; // or vulkan 1.1
for (size_t i = 0; i < extensions_len; i++) {
if (!check_extension(avail_ext_props, avail_extc, extensions[i])) {
@ -488,15 +496,10 @@ struct wlr_vk_device *vulkan_device_create(struct wlr_vk_instance *ini,
graphics_found = queue_props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
if (graphics_found) {
dev->queue_family = i;
dev->timestamp_valid_bits = queue_props[i].timestampValidBits;
break;
}
}
assert(graphics_found);
VkPhysicalDeviceProperties phdev_props;
vkGetPhysicalDeviceProperties(phdev, &phdev_props);
dev->timestamp_period = phdev_props.limits.timestampPeriod;
}
bool exportable_semaphore = false, importable_semaphore = false;
@ -627,10 +630,6 @@ struct wlr_vk_device *vulkan_device_create(struct wlr_vk_instance *ini,
load_device_proc(dev, "vkGetSemaphoreCounterValueKHR",
&dev->api.vkGetSemaphoreCounterValueKHR);
load_device_proc(dev, "vkQueueSubmit2KHR", &dev->api.vkQueueSubmit2KHR);
load_device_proc(dev, "vkBindImageMemory2KHR", &dev->api.vkBindImageMemory2KHR);
load_device_proc(dev, "vkCreateSamplerYcbcrConversionKHR", &dev->api.vkCreateSamplerYcbcrConversionKHR);
load_device_proc(dev, "vkDestroySamplerYcbcrConversionKHR", &dev->api.vkDestroySamplerYcbcrConversionKHR);
load_device_proc(dev, "vkGetImageMemoryRequirements2KHR", &dev->api.vkGetImageMemoryRequirements2KHR);
if (has_external_semaphore_fd) {
load_device_proc(dev, "vkGetSemaphoreFdKHR", &dev->api.vkGetSemaphoreFdKHR);


@ -1,154 +0,0 @@
#include <stdio.h>
#include <time.h>
#include <wlr/types/wlr_scene.h>
struct tree_spec {
// Parameters for the tree we'll construct
int depth;
int branching;
int rect_size;
int spread;
// Stats around the tree we built
int tree_count;
int rect_count;
int max_x;
int max_y;
};
static int max(int a, int b) {
return a > b ? a : b;
}
static double timespec_diff_msec(struct timespec *start, struct timespec *end) {
return (double)(end->tv_sec - start->tv_sec) * 1e3 +
(double)(end->tv_nsec - start->tv_nsec) / 1e6;
}
static bool build_tree(struct wlr_scene_tree *parent, struct tree_spec *spec,
int depth, int x, int y) {
if (depth == spec->depth) {
float color[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
struct wlr_scene_rect *rect =
wlr_scene_rect_create(parent, spec->rect_size, spec->rect_size, color);
if (rect == NULL) {
fprintf(stderr, "wlr_scene_rect_create failed\n");
return false;
}
wlr_scene_node_set_position(&rect->node, x, y);
spec->max_x = max(spec->max_x, x + spec->rect_size);
spec->max_y = max(spec->max_y, y + spec->rect_size);
spec->rect_count++;
return true;
}
for (int i = 0; i < spec->branching; i++) {
struct wlr_scene_tree *child = wlr_scene_tree_create(parent);
if (child == NULL) {
fprintf(stderr, "wlr_scene_tree_create failed\n");
return false;
}
spec->tree_count++;
int offset = i * spec->spread;
wlr_scene_node_set_position(&child->node, offset, offset);
if (!build_tree(child, spec, depth + 1, x + offset, y + offset)) {
return false;
}
}
return true;
}
static bool bench_create_tree(struct wlr_scene *scene, struct tree_spec *spec) {
struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC, &start);
if (!build_tree(&scene->tree, spec, 0, 0, 0)) {
fprintf(stderr, "build_tree failed\n");
return false;
}
clock_gettime(CLOCK_MONOTONIC, &end);
printf("Built tree with %d tree nodes, %d rect nodes\n\n",
spec->tree_count, spec->rect_count);
double elapsed = timespec_diff_msec(&start, &end);
int nodes = spec->tree_count + spec->rect_count;
printf("create test tree: %d nodes, %.3f ms, %.0f nodes/ms\n",
nodes, elapsed, nodes / elapsed);
return true;
}
static void bench_scene_node_at(struct wlr_scene *scene, struct tree_spec *spec) {
struct timespec start, end;
int iters = 10000;
int hits = 0;
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i = 0; i < iters; i++) {
// Spread lookups across the tree extent
double lx = (double)(i * 97 % spec->max_x);
double ly = (double)(i * 53 % spec->max_y);
double nx, ny;
struct wlr_scene_node *node =
wlr_scene_node_at(&scene->tree.node, lx, ly, &nx, &ny);
if (node != NULL) {
hits++;
}
}
clock_gettime(CLOCK_MONOTONIC, &end);
double elapsed = timespec_diff_msec(&start, &end);
int nodes = (spec->tree_count + spec->rect_count) * iters;
printf("wlr_scene_node_at: %d iters, %.3f ms, %.0f nodes/ms (hits: %d/%d)\n",
iters, elapsed, nodes / elapsed, hits, iters);
}
static void noop_iterator(struct wlr_scene_buffer *buffer,
int sx, int sy, void *user_data) {
(void)buffer;
(void)sx;
(void)sy;
int *cnt = user_data;
(*cnt)++;
}
static void bench_scene_node_for_each_buffer(struct wlr_scene *scene, struct tree_spec *spec) {
struct timespec start, end;
int iters = 10000;
int hits = 0;
clock_gettime(CLOCK_MONOTONIC, &start);
for (int i = 0; i < iters; i++) {
wlr_scene_node_for_each_buffer(&scene->tree.node,
noop_iterator, &hits);
}
clock_gettime(CLOCK_MONOTONIC, &end);
double elapsed = timespec_diff_msec(&start, &end);
int nodes = (spec->tree_count + spec->rect_count) * iters;
printf("wlr_scene_node_for_each_buffer: %d iters, %.3f ms, %.0f nodes/ms (hits: %d/%d)\n",
iters, elapsed, nodes / elapsed, hits, iters);
}
int main(void) {
struct wlr_scene *scene = wlr_scene_create();
if (scene == NULL) {
fprintf(stderr, "wlr_scene_create failed\n");
return 99;
}
struct tree_spec spec = {
.depth = 5,
.branching = 5,
.rect_size = 10,
.spread = 100,
};
if (!bench_create_tree(scene, &spec)) {
return 99;
}
bench_scene_node_at(scene, &spec);
bench_scene_node_for_each_buffer(scene, &spec);
wlr_scene_node_destroy(&scene->tree.node);
return 0;
}


@ -1,32 +0,0 @@
# Used to test internal symbols
lib_wlr_internal = static_library(
versioned_name + '-internal',
objects: lib_wlr.extract_all_objects(recursive: false),
dependencies: wlr_deps,
include_directories: [wlr_inc],
install: false,
)
test(
'box',
executable('test-box', 'test_box.c', dependencies: wlroots),
)
if features.get('vulkan-renderer')
test(
'vulkan_stage_buffer',
executable(
'test-vulkan-stage-buffer',
'test_vulkan_stage_buffer.c',
link_with: lib_wlr_internal,
dependencies: wlr_deps,
include_directories: wlr_inc,
),
)
endif
benchmark(
'scene',
executable('bench-scene', 'bench_scene.c', dependencies: wlroots),
timeout: 30,
)


@ -1,149 +0,0 @@
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <wlr/util/box.h>
static void test_box_empty(void) {
// NULL is empty
assert(wlr_box_empty(NULL));
// Zero width/height
struct wlr_box box = { .x = 0, .y = 0, .width = 0, .height = 10 };
assert(wlr_box_empty(&box));
box = (struct wlr_box){ .x = 0, .y = 0, .width = 10, .height = 0 };
assert(wlr_box_empty(&box));
// Negative width/height
box = (struct wlr_box){ .x = 0, .y = 0, .width = -1, .height = 10 };
assert(wlr_box_empty(&box));
box = (struct wlr_box){ .x = 0, .y = 0, .width = 10, .height = -1 };
assert(wlr_box_empty(&box));
// Valid box
box = (struct wlr_box){ .x = 0, .y = 0, .width = 10, .height = 10 };
assert(!wlr_box_empty(&box));
}
static void test_box_intersection(void) {
struct wlr_box dest;
// Overlapping
struct wlr_box a = { .x = 0, .y = 0, .width = 100, .height = 100 };
struct wlr_box b = { .x = 50, .y = 50, .width = 100, .height = 100 };
assert(wlr_box_intersection(&dest, &a, &b));
assert(dest.x == 50 && dest.y == 50 &&
dest.width == 50 && dest.height == 50);
// Non-overlapping
b = (struct wlr_box){ .x = 200, .y = 200, .width = 50, .height = 50 };
assert(!wlr_box_intersection(&dest, &a, &b));
assert(dest.width == 0 && dest.height == 0);
// Touching edges
b = (struct wlr_box){ .x = 100, .y = 0, .width = 50, .height = 50 };
assert(!wlr_box_intersection(&dest, &a, &b));
// Self-intersection
assert(wlr_box_intersection(&dest, &a, &a));
assert(dest.x == a.x && dest.y == a.y &&
dest.width == a.width && dest.height == a.height);
// Empty input
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_intersection(&dest, &a, &empty));
// NULL input
assert(!wlr_box_intersection(&dest, &a, NULL));
assert(!wlr_box_intersection(&dest, NULL, &a));
}
static void test_box_intersects_box(void) {
// Overlapping
struct wlr_box a = { .x = 0, .y = 0, .width = 100, .height = 100 };
struct wlr_box b = { .x = 50, .y = 50, .width = 100, .height = 100 };
assert(wlr_box_intersects(&a, &b));
// Non-overlapping
b = (struct wlr_box){ .x = 200, .y = 200, .width = 50, .height = 50 };
assert(!wlr_box_intersects(&a, &b));
// Touching edges
b = (struct wlr_box){ .x = 100, .y = 0, .width = 50, .height = 50 };
assert(!wlr_box_intersects(&a, &b));
// Self-intersection
assert(wlr_box_intersects(&a, &a));
// Empty input
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_intersects(&a, &empty));
// NULL input
assert(!wlr_box_intersects(&a, NULL));
assert(!wlr_box_intersects(NULL, &a));
}
static void test_box_contains_point(void) {
struct wlr_box box = { .x = 10, .y = 20, .width = 100, .height = 50 };
// Interior point
assert(wlr_box_contains_point(&box, 50, 40));
// Inclusive lower bound
assert(wlr_box_contains_point(&box, 10, 20));
// Exclusive upper bound
assert(!wlr_box_contains_point(&box, 110, 70));
assert(!wlr_box_contains_point(&box, 110, 40));
assert(!wlr_box_contains_point(&box, 50, 70));
// Outside
assert(!wlr_box_contains_point(&box, 5, 40));
assert(!wlr_box_contains_point(&box, 50, 15));
// Empty box
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_contains_point(&empty, 0, 0));
// NULL
assert(!wlr_box_contains_point(NULL, 0, 0));
}
static void test_box_contains_box(void) {
struct wlr_box outer = { .x = 0, .y = 0, .width = 100, .height = 100 };
// Fully contained
struct wlr_box inner = { .x = 10, .y = 10, .width = 50, .height = 50 };
assert(wlr_box_contains_box(&outer, &inner));
// Self-containment
assert(wlr_box_contains_box(&outer, &outer));
// Partial overlap — not contained
struct wlr_box partial = { .x = 50, .y = 50, .width = 100, .height = 100 };
assert(!wlr_box_contains_box(&outer, &partial));
// Empty inner
struct wlr_box empty = { .x = 0, .y = 0, .width = 0, .height = 0 };
assert(!wlr_box_contains_box(&outer, &empty));
// Empty outer
assert(!wlr_box_contains_box(&empty, &inner));
// NULL
assert(!wlr_box_contains_box(&outer, NULL));
assert(!wlr_box_contains_box(NULL, &outer));
}
int main(void) {
#ifdef NDEBUG
fprintf(stderr, "NDEBUG must be disabled for tests\n");
return 1;
#endif
test_box_empty();
test_box_intersection();
test_box_intersects_box();
test_box_contains_point();
test_box_contains_box();
return 0;
}


@ -1,200 +0,0 @@
#include <assert.h>
#include <stdio.h>
#include <wayland-util.h>
#include "render/vulkan.h"
#define BUF_SIZE 1024
#define ALLOC_FAIL ((VkDeviceSize)-1)
static void stage_buffer_init(struct wlr_vk_stage_buffer *buf) {
*buf = (struct wlr_vk_stage_buffer){
.buf_size = BUF_SIZE,
};
wl_array_init(&buf->watermarks);
}
static void stage_buffer_finish(struct wlr_vk_stage_buffer *buf) {
wl_array_release(&buf->watermarks);
}
static void push_watermark(struct wlr_vk_stage_buffer *buf,
uint64_t timeline_point) {
struct wlr_vk_stage_watermark *mark = wl_array_add(
&buf->watermarks, sizeof(*mark));
assert(mark != NULL);
*mark = (struct wlr_vk_stage_watermark){
.head = buf->head,
.timeline_point = timeline_point,
};
}
static size_t watermark_count(const struct wlr_vk_stage_buffer *buf) {
return buf->watermarks.size / sizeof(struct wlr_vk_stage_watermark);
}
static void test_alloc_simple(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
assert(buf.head == 100);
assert(vulkan_stage_buffer_alloc(&buf, 200, 1) == 100);
assert(buf.head == 300);
assert(buf.tail == 0);
stage_buffer_finish(&buf);
}
static void test_alloc_alignment(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 7, 1) == 0);
assert(buf.head == 7);
assert(vulkan_stage_buffer_alloc(&buf, 4, 16) == 16);
assert(buf.head == 20);
assert(vulkan_stage_buffer_alloc(&buf, 8, 8) == 24);
assert(buf.head == 32);
stage_buffer_finish(&buf);
}
static void test_alloc_limit(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
// We do not allow allocations that would cause head to equal tail
assert(vulkan_stage_buffer_alloc(&buf, BUF_SIZE, 1) == ALLOC_FAIL);
assert(buf.head == 0);
assert(vulkan_stage_buffer_alloc(&buf, BUF_SIZE-1, 1) == 0);
assert(buf.head == BUF_SIZE-1);
stage_buffer_finish(&buf);
}
static void test_alloc_wrap(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
// Fill the first 924 bytes
assert(vulkan_stage_buffer_alloc(&buf, BUF_SIZE - 100, 1) == 0);
push_watermark(&buf, 1);
// Fill the end of the buffer
assert(vulkan_stage_buffer_alloc(&buf, 50, 1) == 924);
push_watermark(&buf, 2);
// First, check that we don't wrap prematurely
assert(vulkan_stage_buffer_alloc(&buf, 50, 1) == ALLOC_FAIL);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == ALLOC_FAIL);
// Free the beginning of the buffer and try to wrap again
vulkan_stage_buffer_reclaim(&buf, 1);
assert(vulkan_stage_buffer_alloc(&buf, 50, 1) == 0);
assert(buf.tail == 924);
assert(buf.head == 50);
// Check that freeing from the end of the buffer still works
vulkan_stage_buffer_reclaim(&buf, 2);
assert(buf.tail == 974);
assert(buf.head == 50);
// Check that allocations still work
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 50);
assert(buf.tail == 974);
assert(buf.head == 150);
stage_buffer_finish(&buf);
}
static void test_reclaim_empty(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
// Fresh buffer with no watermarks and head == tail == 0 is drained.
vulkan_stage_buffer_reclaim(&buf, 0);
assert(buf.head == buf.tail);
assert(buf.tail == 0);
stage_buffer_finish(&buf);
}
static void test_reclaim_pending_not_completed(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
push_watermark(&buf, 1);
// current point hasn't reached the watermark yet.
vulkan_stage_buffer_reclaim(&buf, 0);
assert(buf.head != buf.tail);
assert(buf.tail == 0);
assert(watermark_count(&buf) == 1);
stage_buffer_finish(&buf);
}
static void test_reclaim_partial(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
push_watermark(&buf, 1);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 100);
push_watermark(&buf, 2);
// Only the first watermark is reached.
vulkan_stage_buffer_reclaim(&buf, 1);
assert(buf.head != buf.tail);
assert(buf.tail == 100);
assert(watermark_count(&buf) == 1);
const struct wlr_vk_stage_watermark *remaining = buf.watermarks.data;
assert(remaining[0].head == 200);
assert(remaining[0].timeline_point == 2);
stage_buffer_finish(&buf);
}
static void test_reclaim_all(void) {
struct wlr_vk_stage_buffer buf;
stage_buffer_init(&buf);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 0);
push_watermark(&buf, 1);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 100);
push_watermark(&buf, 2);
assert(vulkan_stage_buffer_alloc(&buf, 100, 1) == 200);
push_watermark(&buf, 3);
vulkan_stage_buffer_reclaim(&buf, 100);
assert(buf.head == buf.tail);
assert(buf.tail == 300);
assert(watermark_count(&buf) == 0);
stage_buffer_finish(&buf);
}
int main(void) {
#ifdef NDEBUG
fprintf(stderr, "NDEBUG must be disabled for tests\n");
return 1;
#endif
test_alloc_simple();
test_alloc_alignment();
test_alloc_limit();
test_alloc_wrap();
test_reclaim_empty();
test_reclaim_pending_not_completed();
test_reclaim_partial();
test_reclaim_all();
return 0;
}


@ -1,6 +1,6 @@
PKG_CONFIG?=pkg-config
PKGS="wlroots-0.21" wayland-server xkbcommon
PKGS="wlroots-0.20" wayland-server xkbcommon
CFLAGS_PKG_CONFIG!=$(PKG_CONFIG) --cflags $(PKGS)
CFLAGS+=$(CFLAGS_PKG_CONFIG)
LIBS!=$(PKG_CONFIG) --libs $(PKGS)


@ -17,7 +17,7 @@ foreign_toplevel_manager_from_resource(struct wl_resource *resource) {
}
bool wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_accept(
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event *request,
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request *request,
struct wlr_ext_image_capture_source_v1 *source) {
return wlr_ext_image_capture_source_v1_create_resource(source, request->client, request->new_id);
}
@ -34,13 +34,18 @@ static void foreign_toplevel_manager_handle_create_source(struct wl_client *clie
return;
}
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request_event request = {
.toplevel_handle = toplevel_handle,
.client = client,
.new_id = new_id,
};
struct wlr_ext_foreign_toplevel_image_capture_source_manager_v1_request *request =
calloc(1, sizeof(*request));
if (request == NULL) {
wl_resource_post_no_memory(manager_resource);
return;
}
wl_signal_emit_mutable(&manager->events.capture_request, &request);
request->toplevel_handle = toplevel_handle;
request->client = client;
request->new_id = new_id;
wl_signal_emit_mutable(&manager->events.new_request, request);
}
static void foreign_toplevel_manager_handle_destroy(struct wl_client *client,
@ -71,7 +76,7 @@ static void foreign_toplevel_manager_handle_display_destroy(struct wl_listener *
wl_container_of(listener, manager, display_destroy);
wl_signal_emit_mutable(&manager->events.destroy, NULL);
assert(wl_list_empty(&manager->events.destroy.listener_list));
assert(wl_list_empty(&manager->events.capture_request.listener_list));
assert(wl_list_empty(&manager->events.new_request.listener_list));
wl_list_remove(&manager->display_destroy.link);
wl_global_destroy(manager->global);
free(manager);
@ -97,7 +102,7 @@ wlr_ext_foreign_toplevel_image_capture_source_manager_v1_create(struct wl_displa
}
wl_signal_init(&manager->events.destroy);
wl_signal_init(&manager->events.capture_request);
wl_signal_init(&manager->events.new_request);
manager->display_destroy.notify = foreign_toplevel_manager_handle_display_destroy;
wl_display_add_destroy_listener(display, &manager->display_destroy);


@ -34,14 +34,13 @@ struct scene_node_source_frame_event {
static size_t last_output_num = 0;
static void _get_scene_node_extents(struct wlr_scene_node *node, int lx, int ly,
int *x_min, int *y_min, int *x_max, int *y_max) {
static void _get_scene_node_extents(struct wlr_scene_node *node, struct wlr_box *box, int lx, int ly) {
switch (node->type) {
case WLR_SCENE_NODE_TREE:;
struct wlr_scene_tree *scene_tree = wlr_scene_tree_from_node(node);
struct wlr_scene_node *child;
wl_list_for_each(child, &scene_tree->children, link) {
_get_scene_node_extents(child, lx + child->x, ly + child->y, x_min, y_min, x_max, y_max);
_get_scene_node_extents(child, box, lx + child->x, ly + child->y);
}
break;
case WLR_SCENE_NODE_RECT:
@ -49,30 +48,27 @@ static void _get_scene_node_extents(struct wlr_scene_node *node, int lx, int ly,
struct wlr_box node_box = { .x = lx, .y = ly };
scene_node_get_size(node, &node_box.width, &node_box.height);
if (node_box.x < *x_min) {
*x_min = node_box.x;
if (node_box.x < box->x) {
box->x = node_box.x;
}
if (node_box.y < *y_min) {
*y_min = node_box.y;
if (node_box.y < box->y) {
box->y = node_box.y;
}
if (node_box.x + node_box.width > *x_max) {
*x_max = node_box.x + node_box.width;
if (node_box.x + node_box.width > box->x + box->width) {
box->width = node_box.x + node_box.width - box->x;
}
if (node_box.y + node_box.height > *y_max) {
*y_max = node_box.y + node_box.height;
if (node_box.y + node_box.height > box->y + box->height) {
box->height = node_box.y + node_box.height - box->y;
}
break;
}
}
static void get_scene_node_extents(struct wlr_scene_node *node, struct wlr_box *box) {
*box = (struct wlr_box){ .x = INT_MAX, .y = INT_MAX };
int lx = 0, ly = 0;
wlr_scene_node_coords(node, &lx, &ly);
*box = (struct wlr_box){ .x = INT_MAX, .y = INT_MAX };
int x_max = INT_MIN, y_max = INT_MIN;
_get_scene_node_extents(node, lx, ly, &box->x, &box->y, &x_max, &y_max);
box->width = x_max - box->x;
box->height = y_max - box->y;
_get_scene_node_extents(node, box, lx, ly);
}
static void source_render(struct scene_node_source *source) {


@ -48,7 +48,6 @@ wlr_files += files(
'wlr_data_control_v1.c',
'wlr_drm.c',
'wlr_export_dmabuf_v1.c',
'wlr_ext_background_effect_v1.c',
'wlr_ext_data_control_v1.c',
'wlr_ext_foreign_toplevel_list_v1.c',
'wlr_ext_image_copy_capture_v1.c',


@ -159,8 +159,9 @@ static void output_cursor_update_visible(struct wlr_output_cursor *cursor) {
struct wlr_box cursor_box;
output_cursor_get_box(cursor, &cursor_box);
struct wlr_box intersection;
cursor->visible =
wlr_box_intersects(&output_box, &cursor_box);
wlr_box_intersection(&intersection, &output_box, &cursor_box);
}
static bool output_pick_cursor_format(struct wlr_output *output,

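The hunk above and several below swap between wlr_box_intersects() and wlr_box_intersection(): both report whether two boxes overlap (edge-touching boxes do not count, per the box tests earlier in this diff), but the latter also writes out the overlapping region. A minimal sketch of the difference, assuming the wlr/util/box.h API:

#include <assert.h>
#include <wlr/util/box.h>

int main(void) {
	struct wlr_box a = { .x = 0, .y = 0, .width = 100, .height = 100 };
	struct wlr_box b = { .x = 50, .y = 50, .width = 100, .height = 100 };

	// Boolean overlap test only:
	assert(wlr_box_intersects(&a, &b));

	// Same test, but also computes the overlapping region:
	struct wlr_box out;
	assert(wlr_box_intersection(&out, &a, &b));
	assert(out.x == 50 && out.y == 50 && out.width == 50 && out.height == 50);
	return 0;
}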

@ -233,11 +233,6 @@ static void output_apply_state(struct wlr_output *output,
output->transform = state->transform;
}
if (state->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) {
output->color_encoding = state->color_encoding;
output->color_range = state->color_range;
}
if (state->committed & WLR_OUTPUT_STATE_IMAGE_DESCRIPTION) {
if (state->image_description != NULL) {
output->image_description_value = *state->image_description;
@ -585,11 +580,6 @@ static uint32_t output_compare_state(struct wlr_output *output,
output->color_transform == state->color_transform) {
fields |= WLR_OUTPUT_STATE_COLOR_TRANSFORM;
}
if ((state->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) &&
output->color_encoding == state->color_encoding &&
output->color_range == state->color_range) {
fields |= WLR_OUTPUT_STATE_COLOR_REPRESENTATION;
}
return fields;
}
@ -625,7 +615,7 @@ static bool output_basic_test(struct wlr_output *output,
};
struct wlr_box dst_box;
output_state_get_buffer_dst_box(state, &dst_box);
if (!wlr_box_intersects(&output_box, &dst_box)) {
if (!wlr_box_intersection(&output_box, &output_box, &dst_box)) {
wlr_log(WLR_ERROR, "Primary buffer is entirely off-screen or 0-sized");
return false;
}
@ -642,10 +632,6 @@ static bool output_basic_test(struct wlr_output *output,
wlr_log(WLR_DEBUG, "Tried to set signal timeline without a buffer");
return false;
}
if (state->committed & WLR_OUTPUT_STATE_COLOR_REPRESENTATION) {
wlr_log(WLR_DEBUG, "Tried to set color representation without a buffer");
return false;
}
}
if (state->committed & WLR_OUTPUT_STATE_RENDER_FORMAT) {


@ -141,14 +141,6 @@ bool wlr_output_state_set_image_description(struct wlr_output_state *state,
return true;
}
void wlr_output_state_set_color_encoding_and_range(
struct wlr_output_state *state,
enum wlr_color_encoding encoding, enum wlr_color_range range) {
state->committed |= WLR_OUTPUT_STATE_COLOR_REPRESENTATION;
state->color_encoding = encoding;
state->color_range = range;
}
bool wlr_output_state_copy(struct wlr_output_state *dst,
const struct wlr_output_state *src) {
struct wlr_output_state copy = *src;


@ -361,9 +361,10 @@ static void handle_scene_surface_surface_commit(
// the surface anyway.
int lx, ly;
bool enabled = wlr_scene_node_coords(&scene_buffer->node, &lx, &ly);
struct wlr_output *output = get_surface_frame_pacing_output(surface->surface);
if (!wl_list_empty(&surface->surface->current.frame_callback_list) && output && enabled) {
wlr_output_schedule_frame(output);
if (!wl_list_empty(&surface->surface->current.frame_callback_list) &&
surface->buffer->primary_output != NULL && enabled) {
wlr_output_schedule_frame(surface->buffer->primary_output->output);
}
}


@ -113,11 +113,24 @@ void wlr_scene_node_destroy(struct wlr_scene_node *node) {
if (node->type == WLR_SCENE_NODE_BUFFER) {
struct wlr_scene_buffer *scene_buffer = wlr_scene_buffer_from_node(node);
uint64_t active = scene_buffer->active_outputs;
if (active) {
struct wlr_scene_output *scene_output;
wl_list_for_each(scene_output, &scene->outputs, link) {
if (active & (1ull << scene_output->index)) {
wl_signal_emit_mutable(&scene_buffer->events.output_leave,
scene_output);
}
}
}
scene_buffer_set_buffer(scene_buffer, NULL);
scene_buffer_set_texture(scene_buffer, NULL);
pixman_region32_fini(&scene_buffer->opaque_region);
wlr_drm_syncobj_timeline_unref(scene_buffer->wait_timeline);
assert(wl_list_empty(&scene_buffer->events.output_leave.listener_list));
assert(wl_list_empty(&scene_buffer->events.output_enter.listener_list));
assert(wl_list_empty(&scene_buffer->events.outputs_update.listener_list));
assert(wl_list_empty(&scene_buffer->events.output_sample.listener_list));
assert(wl_list_empty(&scene_buffer->events.frame_done.listener_list));
@ -225,7 +238,7 @@ static bool _scene_nodes_in_box(struct wlr_scene_node *node, struct wlr_box *box
struct wlr_box node_box = { .x = lx, .y = ly };
scene_node_get_size(node, &node_box.width, &node_box.height);
if (wlr_box_intersects(&node_box, box) &&
if (wlr_box_intersection(&node_box, &node_box, box) &&
iterator(node, lx, ly, user_data)) {
return true;
}
@ -468,12 +481,28 @@ static void update_node_update_outputs(struct wlr_scene_node *node,
(struct wlr_linux_dmabuf_feedback_v1_init_options){0};
}
uint64_t old_active = scene_buffer->active_outputs;
scene_buffer->active_outputs = active_outputs;
struct wlr_scene_output *scene_output;
wl_list_for_each(scene_output, outputs, link) {
uint64_t mask = 1ull << scene_output->index;
bool intersects = active_outputs & mask;
bool intersects_before = old_active & mask;
if (intersects && !intersects_before) {
wl_signal_emit_mutable(&scene_buffer->events.output_enter, scene_output);
} else if (!intersects && intersects_before) {
wl_signal_emit_mutable(&scene_buffer->events.output_leave, scene_output);
}
}
// if there are active outputs on this node, we should always have a primary
// output
assert(!active_outputs || scene_buffer->primary_output);
assert(!scene_buffer->active_outputs || scene_buffer->primary_output);
// Skip output update event if nothing was updated
if (scene_buffer->active_outputs == active_outputs &&
if (old_active == active_outputs &&
(!force || ((1ull << force->index) & ~active_outputs)) &&
old_primary_output == scene_buffer->primary_output) {
return;
@ -486,7 +515,6 @@ static void update_node_update_outputs(struct wlr_scene_node *node,
};
size_t i = 0;
struct wlr_scene_output *scene_output;
wl_list_for_each(scene_output, outputs, link) {
if (~active_outputs & (1ull << scene_output->index)) {
continue;
@@ -496,7 +524,6 @@ static void update_node_update_outputs(struct wlr_scene_node *node,
outputs_array[i++] = scene_output;
}
-scene_buffer->active_outputs = active_outputs;
wl_signal_emit_mutable(&scene_buffer->events.outputs_update, &event);
}
@@ -842,6 +869,8 @@ struct wlr_scene_buffer *wlr_scene_buffer_create(struct wlr_scene_tree *parent,
scene_node_init(&scene_buffer->node, WLR_SCENE_NODE_BUFFER, parent);
wl_signal_init(&scene_buffer->events.outputs_update);
+wl_signal_init(&scene_buffer->events.output_enter);
+wl_signal_init(&scene_buffer->events.output_leave);
wl_signal_init(&scene_buffer->events.output_sample);
wl_signal_init(&scene_buffer->events.frame_done);
@@ -2069,6 +2098,15 @@ static enum scene_direct_scanout_result scene_entry_try_direct_scanout(
return SCANOUT_INELIGIBLE;
}
+bool is_color_repr_none = buffer->color_encoding == WLR_COLOR_ENCODING_NONE &&
+buffer->color_range == WLR_COLOR_RANGE_NONE;
+bool is_color_repr_identity_full = buffer->color_encoding == WLR_COLOR_ENCODING_IDENTITY &&
+buffer->color_range == WLR_COLOR_RANGE_FULL;
+if (!(is_color_repr_none || is_color_repr_identity_full)) {
+return SCANOUT_INELIGIBLE;
+}
// We want to ensure optimal buffer selection, but as direct-scanout can be enabled and disabled
// on a frame-by-frame basis, we wait for a few frames to send the new format recommendations.
// Maybe we should only send feedback in this case if tests fail.
@@ -2111,22 +2149,11 @@ static enum scene_direct_scanout_result scene_entry_try_direct_scanout(
wlr_output_state_set_wait_timeline(&pending, buffer->wait_timeline, buffer->wait_point);
}
if (scene_output->out_timeline) {
scene_output->out_point++;
wlr_output_state_set_signal_timeline(&pending, scene_output->out_timeline, scene_output->out_point);
}
-if (buffer->color_encoding == WLR_COLOR_ENCODING_IDENTITY &&
-buffer->color_range == WLR_COLOR_RANGE_FULL) {
-// IDENTITY+FULL (used for RGB formats) is equivalent to no color
-// representation being set at all.
-wlr_output_state_set_color_encoding_and_range(&pending,
-WLR_COLOR_ENCODING_NONE, WLR_COLOR_RANGE_NONE);
-} else {
-wlr_output_state_set_color_encoding_and_range(&pending,
-buffer->color_encoding, buffer->color_range);
-}
if (!wlr_output_test_state(scene_output->output, &pending)) {
wlr_output_state_finish(&pending);
return SCANOUT_CANDIDATE;
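The added eligibility check and the removed mapping block encode the same fact: IDENTITY encoding with FULL range, the implicit default for RGB buffers, is equivalent to no color representation at all. A hedged sketch of the predicate, using the enum values from the diff (the header location is an assumption):

#include <stdbool.h>
#include <wlr/render/color.h> // assumed location of the enums

// True when the buffer's color representation requires no conversion,
// i.e. it is unset, or the no-op IDENTITY+FULL combination.
static bool color_repr_is_noop(enum wlr_color_encoding enc, enum wlr_color_range range) {
	return (enc == WLR_COLOR_ENCODING_NONE && range == WLR_COLOR_RANGE_NONE) ||
		(enc == WLR_COLOR_ENCODING_IDENTITY && range == WLR_COLOR_RANGE_FULL);
}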
@@ -2669,7 +2696,8 @@ static void scene_output_for_each_scene_buffer(const struct wlr_box *output_box,
struct wlr_box node_box = { .x = lx, .y = ly };
scene_node_get_size(node, &node_box.width, &node_box.height);
-if (wlr_box_intersects(output_box, &node_box)) {
+struct wlr_box intersection;
+if (wlr_box_intersection(&intersection, output_box, &node_box)) {
struct wlr_scene_buffer *scene_buffer =
wlr_scene_buffer_from_node(node);
user_iterator(scene_buffer, lx, ly, user_data);
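The output_enter/output_leave signals wired up in the hunks above carry the affected wlr_scene_output as the signal data. A minimal listener sketch, assuming a wlr_scene_buffer *buffer obtained elsewhere:

#include <wayland-server-core.h>
#include <wlr/types/wlr_scene.h>
#include <wlr/util/log.h>

static void handle_output_enter(struct wl_listener *listener, void *data) {
	struct wlr_scene_output *scene_output = data; // payload per the emit calls above
	wlr_log(WLR_DEBUG, "buffer entered output %s", scene_output->output->name);
}

static struct wl_listener enter_listener = { .notify = handle_output_enter };
// Registration, given a buffer: wl_signal_add(&buffer->events.output_enter, &enter_listener);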

View file

@@ -246,40 +246,40 @@ static void surface_finalize_pending(struct wlr_surface *surface) {
}
static void surface_update_damage(pixman_region32_t *buffer_damage,
-struct wlr_surface_state *state) {
+struct wlr_surface_state *current, struct wlr_surface_state *pending) {
pixman_region32_clear(buffer_damage);
// Copy over surface damage + buffer damage
pixman_region32_t surface_damage;
pixman_region32_init(&surface_damage);
-pixman_region32_copy(&surface_damage, &state->surface_damage);
+pixman_region32_copy(&surface_damage, &pending->surface_damage);
-if (state->viewport.has_dst) {
+if (pending->viewport.has_dst) {
int src_width, src_height;
-surface_state_viewport_src_size(state, &src_width, &src_height);
-float scale_x = (float)state->viewport.dst_width / src_width;
-float scale_y = (float)state->viewport.dst_height / src_height;
+surface_state_viewport_src_size(pending, &src_width, &src_height);
+float scale_x = (float)pending->viewport.dst_width / src_width;
+float scale_y = (float)pending->viewport.dst_height / src_height;
wlr_region_scale_xy(&surface_damage, &surface_damage,
1.0 / scale_x, 1.0 / scale_y);
}
-if (state->viewport.has_src) {
+if (pending->viewport.has_src) {
// This is lossy: do a best-effort conversion
pixman_region32_translate(&surface_damage,
-floor(state->viewport.src.x),
-floor(state->viewport.src.y));
+floor(pending->viewport.src.x),
+floor(pending->viewport.src.y));
}
-wlr_region_scale(&surface_damage, &surface_damage, state->scale);
+wlr_region_scale(&surface_damage, &surface_damage, pending->scale);
int width, height;
-surface_state_transformed_buffer_size(state, &width, &height);
+surface_state_transformed_buffer_size(pending, &width, &height);
wlr_region_transform(&surface_damage, &surface_damage,
-wlr_output_transform_invert(state->transform),
+wlr_output_transform_invert(pending->transform),
width, height);
pixman_region32_union(buffer_damage,
-&state->buffer_damage, &surface_damage);
+&pending->buffer_damage, &surface_damage);
pixman_region32_fini(&surface_damage);
}
@@ -521,6 +521,8 @@ static void surface_commit_state(struct wlr_surface *surface,
surface->unmap_commit = false;
}
+surface_update_damage(&surface->buffer_damage, &surface->current, next);
surface->previous.scale = surface->current.scale;
surface->previous.transform = surface->current.transform;
surface->previous.width = surface->current.width;
@@ -529,7 +531,6 @@ static void surface_commit_state(struct wlr_surface *surface,
surface->previous.buffer_height = surface->current.buffer_height;
surface_state_move(&surface->current, next, surface);
-surface_update_damage(&surface->buffer_damage, &surface->current);
if (invalid_buffer) {
surface_apply_damage(surface);
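The relocated call is not cosmetic: surface_state_move() consumes next, so the variant of surface_update_damage() that compares current against pending has to run before the move, while both states are still distinct. Sketch of the resulting ordering, taken directly from the hunks above:

// 1. Damage is derived while current and pending coexist:
surface_update_damage(&surface->buffer_damage, &surface->current, next);
// 2. previous.* fields are snapshotted from current.
// 3. The pending state is folded into current, consuming `next`:
surface_state_move(&surface->current, next, surface);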

View file

@@ -1,241 +0,0 @@
#include <assert.h>
#include <stdlib.h>
#include <wlr/types/wlr_ext_background_effect_v1.h>
#include <wlr/types/wlr_compositor.h>
#include <wlr/util/addon.h>
#include "ext-background-effect-v1-protocol.h"
#define BACKGROUND_EFFECT_VERSION 1
struct wlr_ext_background_effect_surface_v1 {
struct wl_resource *resource;
struct wlr_surface *surface;
struct wlr_addon addon;
struct wlr_surface_synced synced;
struct wlr_ext_background_effect_surface_v1_state pending, current;
};
static const struct ext_background_effect_surface_v1_interface surface_impl;
static const struct ext_background_effect_manager_v1_interface manager_impl;
static struct wlr_ext_background_effect_surface_v1 *surface_from_resource(
struct wl_resource *resource) {
assert(wl_resource_instance_of(resource,
&ext_background_effect_surface_v1_interface, &surface_impl));
return wl_resource_get_user_data(resource);
}
static void surface_destroy(struct wlr_ext_background_effect_surface_v1 *surface) {
if (surface == NULL) {
return;
}
wlr_surface_synced_finish(&surface->synced);
wlr_addon_finish(&surface->addon);
wl_resource_set_user_data(surface->resource, NULL);
free(surface);
}
static void surface_handle_resource_destroy(struct wl_resource *resource) {
struct wlr_ext_background_effect_surface_v1 *surface = surface_from_resource(resource);
surface_destroy(surface);
}
static void surface_handle_destroy(struct wl_client *client, struct wl_resource *resource) {
wl_resource_destroy(resource);
}
static void surface_handle_set_blur_region(struct wl_client *client, struct wl_resource *resource,
struct wl_resource *region_resource) {
struct wlr_ext_background_effect_surface_v1 *surface = surface_from_resource(resource);
if (surface == NULL) {
wl_resource_post_error(resource,
EXT_BACKGROUND_EFFECT_SURFACE_V1_ERROR_SURFACE_DESTROYED,
"The wl_surface object has been destroyed");
return;
}
if (region_resource != NULL) {
const pixman_region32_t *region = wlr_region_from_resource(region_resource);
pixman_region32_copy(&surface->pending.blur_region, region);
} else {
pixman_region32_clear(&surface->pending.blur_region);
}
}
static const struct ext_background_effect_surface_v1_interface surface_impl = {
.destroy = surface_handle_destroy,
.set_blur_region = surface_handle_set_blur_region,
};
static void surface_synced_init_state(void *_state) {
struct wlr_ext_background_effect_surface_v1_state *state = _state;
pixman_region32_init(&state->blur_region);
}
static void surface_synced_finish_state(void *_state) {
struct wlr_ext_background_effect_surface_v1_state *state = _state;
pixman_region32_fini(&state->blur_region);
}
static void surface_synced_move_state(void *_dst, void *_src) {
struct wlr_ext_background_effect_surface_v1_state *dst = _dst;
struct wlr_ext_background_effect_surface_v1_state *src = _src;
pixman_region32_copy(&dst->blur_region, &src->blur_region);
}
static const struct wlr_surface_synced_impl surface_synced_impl = {
.state_size = sizeof(struct wlr_ext_background_effect_surface_v1_state),
.init_state = surface_synced_init_state,
.finish_state = surface_synced_finish_state,
.move_state = surface_synced_move_state,
};
static void surface_addon_destroy(struct wlr_addon *addon) {
struct wlr_ext_background_effect_surface_v1 *surface = wl_container_of(addon, surface, addon);
surface_destroy(surface);
}
static const struct wlr_addon_interface surface_addon_impl = {
.name = "ext_background_effect_surface_v1",
.destroy = surface_addon_destroy,
};
static struct wlr_ext_background_effect_surface_v1 *surface_from_wlr_surface(
struct wlr_surface *wlr_surface) {
struct wlr_addon *addon = wlr_addon_find(&wlr_surface->addons, NULL, &surface_addon_impl);
if (addon == NULL) {
return NULL;
}
struct wlr_ext_background_effect_surface_v1 *surface =
wl_container_of(addon, surface, addon);
return surface;
}
static void manager_handle_destroy(struct wl_client *client, struct wl_resource *resource) {
wl_resource_destroy(resource);
}
static void manager_handle_get_background_effect(struct wl_client *client,
struct wl_resource *manager_resource, uint32_t id, struct wl_resource *surface_resource) {
struct wlr_surface *wlr_surface = wlr_surface_from_resource(surface_resource);
if (surface_from_wlr_surface(wlr_surface) != NULL) {
wl_resource_post_error(manager_resource,
EXT_BACKGROUND_EFFECT_MANAGER_V1_ERROR_BACKGROUND_EFFECT_EXISTS,
"The wl_surface object already has a ext_background_effect_surface_v1 object");
return;
}
struct wlr_ext_background_effect_surface_v1 *surface = calloc(1, sizeof(*surface));
if (surface == NULL) {
wl_resource_post_no_memory(manager_resource);
return;
}
if (!wlr_surface_synced_init(&surface->synced, wlr_surface, &surface_synced_impl,
&surface->pending, &surface->current)) {
free(surface);
wl_resource_post_no_memory(manager_resource);
return;
}
uint32_t version = wl_resource_get_version(manager_resource);
surface->resource =
wl_resource_create(client, &ext_background_effect_surface_v1_interface, version, id);
if (surface->resource == NULL) {
wlr_surface_synced_finish(&surface->synced);
free(surface);
wl_resource_post_no_memory(manager_resource);
return;
}
wl_resource_set_implementation(surface->resource, &surface_impl, surface,
surface_handle_resource_destroy);
surface->surface = wlr_surface;
wlr_addon_init(&surface->addon, &wlr_surface->addons, NULL, &surface_addon_impl);
}
static const struct ext_background_effect_manager_v1_interface manager_impl = {
.destroy = manager_handle_destroy,
.get_background_effect = manager_handle_get_background_effect,
};
static void manager_handle_resource_destroy(struct wl_resource *resource) {
wl_list_remove(wl_resource_get_link(resource));
}
static void manager_bind(struct wl_client *wl_client, void *data, uint32_t version, uint32_t id) {
struct wlr_ext_background_effect_manager_v1 *manager = data;
struct wl_resource *resource = wl_resource_create(wl_client,
&ext_background_effect_manager_v1_interface, version, id);
if (resource == NULL) {
wl_client_post_no_memory(wl_client);
return;
}
wl_resource_set_implementation(resource, &manager_impl, manager,
manager_handle_resource_destroy);
wl_list_insert(&manager->resources, wl_resource_get_link(resource));
ext_background_effect_manager_v1_send_capabilities(resource, manager->capabilities);
}
static void handle_display_destroy(struct wl_listener *listener, void *data) {
struct wlr_ext_background_effect_manager_v1 *manager =
wl_container_of(listener, manager, display_destroy);
wl_signal_emit_mutable(&manager->events.destroy, NULL);
assert(wl_list_empty(&manager->events.destroy.listener_list));
wl_global_destroy(manager->global);
wl_list_remove(&manager->display_destroy.link);
free(manager);
}
struct wlr_ext_background_effect_manager_v1 *wlr_ext_background_effect_manager_v1_create(
struct wl_display *display, uint32_t version, uint32_t capabilities) {
assert(version <= BACKGROUND_EFFECT_VERSION);
struct wlr_ext_background_effect_manager_v1 *manager = calloc(1, sizeof(*manager));
if (manager == NULL) {
return NULL;
}
manager->global = wl_global_create(display, &ext_background_effect_manager_v1_interface, version,
manager, manager_bind);
if (manager->global == NULL) {
free(manager);
return NULL;
}
manager->capabilities = capabilities;
wl_signal_init(&manager->events.destroy);
wl_list_init(&manager->resources);
manager->display_destroy.notify = handle_display_destroy;
wl_display_add_destroy_listener(display, &manager->display_destroy);
return manager;
}
const struct wlr_ext_background_effect_surface_v1_state *
wlr_ext_background_effect_v1_get_surface_state(struct wlr_surface *wlr_surface) {
struct wlr_ext_background_effect_surface_v1 *surface =
surface_from_wlr_surface(wlr_surface);
if (surface == NULL) {
return NULL;
}
return &surface->current;
}
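The @@ -1,241 +0,0 @@ header above means this entire file exists on only one side of the compare. On the side that has it, the manager follows the usual wlroots global pattern; a hedged usage sketch (the capability constant is assumed from the ext-background-effect-v1 protocol, everything else appears in the file above):

#include <wlr/types/wlr_ext_background_effect_v1.h>

// At compositor startup; `display` is the compositor's wl_display.
struct wlr_ext_background_effect_manager_v1 *mgr =
	wlr_ext_background_effect_manager_v1_create(display, 1,
		EXT_BACKGROUND_EFFECT_MANAGER_V1_CAPABILITY_BLUR /* assumed flag */);

// Per committed surface, query the client-requested blur region:
const struct wlr_ext_background_effect_surface_v1_state *state =
	wlr_ext_background_effect_v1_get_surface_state(wlr_surface);
if (state != NULL) {
	// state->blur_region is a pixman_region32_t in surface-local coordinates
}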

View file

@@ -168,7 +168,7 @@ wlr_ext_foreign_toplevel_handle_v1_create(struct wlr_ext_foreign_toplevel_list_v
return NULL;
}
-wl_list_insert(list->toplevels.prev, &toplevel->link);
+wl_list_insert(&list->toplevels, &toplevel->link);
toplevel->list = list;
if (state->app_id) {
toplevel->app_id = strdup(state->app_id);
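This hunk, and the matching ones in the next two files, swaps tail insertion for head insertion. wl_list_insert(where, elm) links elm immediately after where, which yields the two idioms; a small sketch:

#include <wayland-server-core.h>

struct item {
	struct wl_list link;
};

static void append_vs_prepend(struct wl_list *head, struct item *it) {
	// Append at the tail: insert after the last element (head->prev).
	wl_list_insert(head->prev, &it->link);
	// Prepend at the head: insert directly after the head itself,
	// which is what the changed lines do:
	// wl_list_insert(head, &it->link);
}

The visible consequence is iteration order: wl_list_for_each() visits oldest-first after appends and newest-first after prepends.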

View file

@@ -790,7 +790,7 @@ struct wlr_ext_workspace_handle_v1 *wlr_ext_workspace_handle_v1_create(
wl_array_init(&workspace->coordinates);
wl_signal_init(&workspace->events.destroy);
-wl_list_insert(manager->workspaces.prev, &workspace->link);
+wl_list_insert(&manager->workspaces, &workspace->link);
struct wlr_ext_workspace_manager_v1_resource *manager_res;
wl_list_for_each(manager_res, &manager->resources, link) {

View file

@@ -530,7 +530,7 @@ wlr_foreign_toplevel_handle_v1_create(
return NULL;
}
-wl_list_insert(manager->toplevels.prev, &toplevel->link);
+wl_list_insert(&manager->toplevels, &toplevel->link);
toplevel->manager = manager;
wl_list_init(&toplevel->resources);

View file

@@ -84,16 +84,6 @@ void wlr_keyboard_notify_modifiers(struct wlr_keyboard *keyboard,
uint32_t mods_depressed, uint32_t mods_latched, uint32_t mods_locked,
uint32_t group) {
if (keyboard->xkb_state == NULL) {
-if (keyboard->modifiers.depressed != mods_depressed ||
-keyboard->modifiers.latched != mods_latched ||
-keyboard->modifiers.locked != mods_locked ||
-keyboard->modifiers.group != group) {
-keyboard->modifiers.depressed = mods_depressed;
-keyboard->modifiers.latched = mods_latched;
-keyboard->modifiers.locked = mods_locked;
-keyboard->modifiers.group = group;
-wl_signal_emit_mutable(&keyboard->events.modifiers, keyboard);
-}
return;
}
xkb_state_update_mask(keyboard->xkb_state, mods_depressed, mods_latched,
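Behavioral note: with the removed block, wlr_keyboard_notify_modifiers() tracked the raw modifier masks and emitted the modifiers event even without an xkb_state; without it, the function returns early, so modifier updates are silently dropped until a keymap is configured.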

View file

@@ -394,7 +394,6 @@ static void layer_surface_role_commit(struct wlr_surface *wlr_surface) {
layer_surface_reset(surface);
assert(!surface->initialized);
assert(!surface->initial_commit);
surface->initial_commit = false;
} else {
surface->initial_commit = !surface->initialized;

View file

@@ -12,7 +12,6 @@
#include <xf86drm.h>
#include "config.h"
#include "linux-drm-syncobj-v1-protocol.h"
-#include "render/dmabuf.h"
#include "render/drm_syncobj_merger.h"
#define LINUX_DRM_SYNCOBJ_V1_VERSION 1
@@ -387,7 +386,6 @@ static void manager_handle_import_timeline(struct wl_client *client,
struct wl_resource *timeline_resource = wl_resource_create(client,
&wp_linux_drm_syncobj_timeline_v1_interface, version, id);
if (timeline_resource == NULL) {
wlr_drm_syncobj_timeline_unref(timeline);
wl_resource_post_no_memory(resource);
return;
}
@@ -542,12 +540,3 @@ bool wlr_linux_drm_syncobj_v1_state_add_release_point(
return wlr_drm_syncobj_merger_add(state->release_merger,
release_timeline, release_point, event_loop);
}
-bool wlr_linux_drm_syncobj_v1_state_add_release_from_implicit_sync(
-struct wlr_linux_drm_syncobj_surface_v1_state *state,
-struct wlr_buffer *buffer, struct wl_event_loop *event_loop) {
-if (state->release_merger == NULL) {
-return true;
-}
-return wlr_drm_syncobj_merger_add_dmabuf(state->release_merger, buffer, event_loop);
-}
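Only the explicit release-point path survives on one side of this compare. A hedged sketch of calling the remaining helper, with parameter meanings taken from the signature and body above (all local names are stand-ins supplied by the compositor):

// `state` is the surface's wlr_linux_drm_syncobj_surface_v1_state,
// `timeline`/`point` name where release will be signalled, `loop` is the
// compositor's wl_event_loop. Returns false on allocation failure.
static bool queue_release(struct wlr_linux_drm_syncobj_surface_v1_state *state,
		struct wlr_drm_syncobj_timeline *timeline, uint64_t point,
		struct wl_event_loop *loop) {
	return wlr_linux_drm_syncobj_v1_state_add_release_point(state, timeline, point, loop);
}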

View file

@@ -260,12 +260,14 @@ bool wlr_output_layout_contains_point(struct wlr_output_layout *layout,
bool wlr_output_layout_intersects(struct wlr_output_layout *layout,
struct wlr_output *reference, const struct wlr_box *target_lbox) {
+struct wlr_box out_box;
if (reference == NULL) {
struct wlr_output_layout_output *l_output;
wl_list_for_each(l_output, &layout->outputs, link) {
struct wlr_box output_box;
output_layout_output_get_box(l_output, &output_box);
-if (wlr_box_intersects(&output_box, target_lbox)) {
+if (wlr_box_intersection(&out_box, &output_box, target_lbox)) {
return true;
}
}
@@ -279,7 +281,7 @@ bool wlr_output_layout_intersects(struct wlr_output_layout *layout,
struct wlr_box output_box;
output_layout_output_get_box(l_output, &output_box);
-return wlr_box_intersects(&output_box, target_lbox);
+return wlr_box_intersection(&out_box, &output_box, target_lbox);
}
}
}

View file

@@ -52,7 +52,7 @@ static void virtual_keyboard_keymap(struct wl_client *client,
if (data == MAP_FAILED) {
goto fd_fail;
}
-struct xkb_keymap *keymap = xkb_keymap_new_from_buffer(context, data, size,
+struct xkb_keymap *keymap = xkb_keymap_new_from_string(context, data,
XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);
munmap(data, size);
if (!keymap) {
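The two xkbcommon constructors differ in how the keymap length is conveyed: xkb_keymap_new_from_string() expects a NUL-terminated string, whereas xkb_keymap_new_from_buffer() takes an explicit byte count, which suits mmap'd client data that carries no terminator. Both calls in this sketch are real libxkbcommon API; `data`/`size` are the mmap'd keymap from the hunk above:

#include <xkbcommon/xkbcommon.h>

static struct xkb_keymap *compile_keymap(struct xkb_context *context,
		const char *data, size_t size) {
	// Explicit length; no NUL terminator required:
	return xkb_keymap_new_from_buffer(context, data, size,
		XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);
	// The alternative requires `data` to be NUL-terminated:
	// return xkb_keymap_new_from_string(context, data,
	//     XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);
}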

View file

@@ -320,7 +320,6 @@ static void xdg_surface_role_commit(struct wlr_surface *wlr_surface) {
reset_xdg_surface_role_object(surface);
reset_xdg_surface(surface);
assert(!surface->initialized);
assert(!surface->initial_commit);
surface->initial_commit = false;
} else {

View file

@@ -102,15 +102,6 @@ bool wlr_box_contains_box(const struct wlr_box *bigger, const struct wlr_box *sm
smaller->y + smaller->height <= bigger->y + bigger->height;
}
-bool wlr_box_intersects(const struct wlr_box *a, const struct wlr_box *b) {
-if (wlr_box_empty(a) || wlr_box_empty(b)) {
-return false;
-}
-return a->x < b->x + b->width && b->x < a->x + a->width &&
-a->y < b->y + b->height && b->y < a->y + a->height;
-}
void wlr_box_transform(struct wlr_box *dest, const struct wlr_box *box,
enum wl_output_transform transform, int width, int height) {
struct wlr_box src = {0};
void wlr_box_transform(struct wlr_box *dest, const struct wlr_box *box,
enum wl_output_transform transform, int width, int height) {
struct wlr_box src = {0};
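Callers elsewhere in this compare replace the removed predicate with wlr_box_intersection(), which computes the overlap box and returns whether it is non-empty. A quick sketch of the equivalence:

#include <stdbool.h>
#include <wlr/util/box.h>

static bool overlap_demo(void) {
	struct wlr_box a = { .x = 0, .y = 0, .width = 100, .height = 100 };
	struct wlr_box b = { .x = 50, .y = 50, .width = 100, .height = 100 };
	struct wlr_box overlap;
	// Returns false (and an empty `overlap`) when the boxes don't touch.
	bool hit = wlr_box_intersection(&overlap, &a, &b);
	// Here hit == true and overlap == { 50, 50, 50, 50 }, matching what
	// the removed wlr_box_intersects(&a, &b) would have reported.
	return hit;
}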

View file

@@ -1,15 +1,15 @@
#include <limits.h>
#include "util/rect_union.h"
-static void box_union(pixman_box32_t *dst, const pixman_box32_t *box) {
-dst->x1 = dst->x1 < box->x1 ? dst->x1 : box->x1;
-dst->y1 = dst->y1 < box->y1 ? dst->y1 : box->y1;
-dst->x2 = dst->x2 > box->x2 ? dst->x2 : box->x2;
-dst->y2 = dst->y2 > box->y2 ? dst->y2 : box->y2;
+static void box_union(pixman_box32_t *dst, pixman_box32_t box) {
+dst->x1 = dst->x1 < box.x1 ? dst->x1 : box.x1;
+dst->y1 = dst->y1 < box.y1 ? dst->y1 : box.y1;
+dst->x2 = dst->x2 > box.x2 ? dst->x2 : box.x2;
+dst->y2 = dst->y2 > box.y2 ? dst->y2 : box.y2;
}
-static bool box_empty_or_invalid(const pixman_box32_t *box) {
-return box->x1 >= box->x2 || box->y1 >= box->y2;
+static bool box_empty_or_invalid(pixman_box32_t box) {
+return box.x1 >= box.x2 || box.y1 >= box.y2;
}
void rect_union_init(struct rect_union *ru) {
void rect_union_init(struct rect_union *ru) {
@@ -37,28 +37,20 @@ static void handle_alloc_failure(struct rect_union *ru) {
wl_array_init(&ru->unsorted);
}
-void rect_union_add(struct rect_union *ru, const pixman_box32_t *box) {
+void rect_union_add(struct rect_union *ru, pixman_box32_t box) {
if (box_empty_or_invalid(box)) {
return;
}
box_union(&ru->bounding_box, box);
-if (ru->alloc_failure) {
-return;
-}
-int nrects = (int)(ru->unsorted.size / sizeof(pixman_box32_t));
-if (nrects >= 1024) {
-handle_alloc_failure(ru);
-return;
-}
-pixman_box32_t *entry = wl_array_add(&ru->unsorted, sizeof(*entry));
-if (entry) {
-*entry = *box;
-} else {
-handle_alloc_failure(ru);
+if (!ru->alloc_failure) {
+pixman_box32_t *entry = wl_array_add(&ru->unsorted, sizeof(*entry));
+if (entry) {
+*entry = box;
+} else {
+handle_alloc_failure(ru);
+}
}
}
@@ -89,7 +81,7 @@ const pixman_region32_t *rect_union_evaluate(struct rect_union *ru) {
return &ru->region;
bounding_box:
pixman_region32_fini(&ru->region);
-if (box_empty_or_invalid(&ru->bounding_box)) {
+if (box_empty_or_invalid(ru->bounding_box)) {
pixman_region32_init(&ru->region);
} else {
pixman_region32_init_with_extents(&ru->region, &ru->bounding_box);
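Apart from passing boxes by value, usage of the helper is the same on both sides; a hedged sketch built from the functions visible in this file (a matching rect_union_finish() cleanup is assumed to exist alongside them):

#include <pixman.h>
#include "util/rect_union.h"

static void union_demo(void) {
	struct rect_union ru;
	rect_union_init(&ru);
	// Empty or inverted boxes are ignored (box_empty_or_invalid above).
	rect_union_add(&ru, (pixman_box32_t){ .x1 = 0, .y1 = 0, .x2 = 10, .y2 = 10 });
	rect_union_add(&ru, (pixman_box32_t){ .x1 = 5, .y1 = 5, .x2 = 20, .y2 = 20 });
	// On allocation failure this degrades to the bounding box
	// (the `bounding_box:` label above) rather than the exact union.
	const pixman_region32_t *region = rect_union_evaluate(&ru);
	(void)region;
}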

View file

@@ -249,7 +249,7 @@ struct x11_data_source {
static const struct wlr_data_source_impl data_source_impl;
bool data_source_is_xwayland(
-const struct wlr_data_source *wlr_source) {
+struct wlr_data_source *wlr_source) {
return wlr_source->impl == &data_source_impl;
}
@@ -292,7 +292,7 @@ static const struct wlr_primary_selection_source_impl
primary_selection_source_impl;
bool primary_selection_source_is_xwayland(
-const struct wlr_primary_selection_source *wlr_source) {
+struct wlr_primary_selection_source *wlr_source) {
return wlr_source->impl == &primary_selection_source_impl;
}

View file

@@ -1157,10 +1157,12 @@ bool wlr_xwayland_surface_fetch_icon(
return false;
}
-bool ok = xcb_ewmh_get_wm_icon_from_reply(icon_reply, reply);
-free(reply);
+if (!xcb_ewmh_get_wm_icon_from_reply(icon_reply, reply)) {
+free(reply);
+return false;
+}
-return ok;
+return true;
}
static xcb_get_property_cookie_t get_property(struct wlr_xwm *xwm,
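Note on the hunk above: the removed lines free the xcb reply exactly once on every path, while the added lines skip the free on success, so that path leaks the reply. The removed shape is the usual idiom for xcb replies:

// Leak-free shape: capture the result, release the reply
// unconditionally, then return.
bool ok = xcb_ewmh_get_wm_icon_from_reply(icon_reply, reply);
free(reply); // success or not, the reply is no longer needed
return ok;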