Compare commits

...

27 commits

Author SHA1 Message Date
Simon Ser
5bc39071d1 build: bump version to 0.18.1 2024-09-20 12:51:54 +02:00
Simon Ser
6f2ce4766f render/vulkan: use non-coherent memory for read_pixels()
The spec for VkMemoryPropertyFlagBits says:

> device coherent accesses may be slower than equivalent accesses
> without device coherence [...] it is generally inadvisable to
> use device coherent or device uncached memory except when really
> needed

We don't really need coherent memory so let's not require it and
invalidate the memory range after mapping instead.

Closes: https://gitlab.freedesktop.org/wlroots/wlroots/-/issues/3868
(cherry picked from commit 52dce29e06)
2024-08-23 13:42:57 -04:00
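For context, the non-coherent readback pattern this patch switches to looks roughly like the sketch below; it assumes a VkDevice named dev and a host-visible, non-coherent allocation named dst_img_memory (illustrative names, requires <vulkan/vulkan.h>), and is not the full vulkan_read_pixels() implementation:

// Map the readback memory, then invalidate it before the CPU reads.
void *v = NULL;
VkResult res = vkMapMemory(dev, dst_img_memory, 0, VK_WHOLE_SIZE, 0, &v);
if (res != VK_SUCCESS) {
    return false;
}
// Without VK_MEMORY_PROPERTY_HOST_COHERENT_BIT, device writes are not
// guaranteed to be visible to the host until the range is invalidated.
VkMappedMemoryRange mem_range = {
    .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
    .memory = dst_img_memory,
    .offset = 0,
    .size = VK_WHOLE_SIZE,
};
res = vkInvalidateMappedMemoryRanges(dev, 1, &mem_range);
if (res != VK_SUCCESS) {
    vkUnmapMemory(dev, dst_img_memory);
    return false;
}
// ... copy the pixels out of `v`, then vkUnmapMemory(dev, dst_img_memory).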
Simon Ser
f43ac6cf9c backend/drm: use CRTCs in-order
When lighting up a new connector, we'd use the last CRTC instead of the
first one. This causes issues because drivers have the expectation that
userspace will match CRTCs to connectors in-order [1].

The order regressed a long time ago in 5b13b8a12c ("backend/drm:
consider continue not using resources"). That commit was a fix to
avoid moving a connector between CRTCs [2]. Revert that commit and
use a different approach: even if we've found a solution, always try
not using a CRTC, in the hope that we'll find another solution with
fewer CRTC replacements.

[1]: https://lore.kernel.org/dri-devel/20240612141903.17219-2-ville.syrjala@linux.intel.com/
[2]: https://github.com/swaywm/wlroots/issues/1230

Closes: https://gitlab.freedesktop.org/wlroots/wlroots/-/issues/3098
(cherry picked from commit d2a5dbe104)
2024-08-23 09:30:33 -04:00
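To make the strategy concrete, below is a self-contained sketch of a matcher in that spirit. It is not the wlroots match_obj_() code (which also handles skips and early exit, as the hunk later in this diff shows); it only illustrates "assign CRTCs in-order, and keep exploring the no-CRTC branch even after a solution is found, so a candidate with fewer replacements can still win":

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define UNMATCHED ((size_t)-1)

struct match_ctx {
    const size_t *prev;            // previous CRTC per connector, or UNMATCHED
    size_t num_conns, num_crtcs;
    size_t *cur, *best;            // current and best assignment per connector
    size_t best_matched, best_replaced;
    // Caller initializes best_matched = 0 and best_replaced = SIZE_MAX.
};

static void match(struct match_ctx *ctx, size_t idx, size_t matched, size_t replaced) {
    if (idx == ctx->num_conns) {
        // Prefer lighting up more connectors; break ties with fewer replacements.
        if (matched > ctx->best_matched ||
                (matched == ctx->best_matched && replaced < ctx->best_replaced)) {
            ctx->best_matched = matched;
            ctx->best_replaced = replaced;
            memcpy(ctx->best, ctx->cur, ctx->num_conns * sizeof(size_t));
        }
        return;
    }
    // Try each free CRTC in-order, so new connectors get the first free CRTC.
    for (size_t crtc = 0; crtc < ctx->num_crtcs; crtc++) {
        bool taken = false;
        for (size_t i = 0; i < idx; i++) {
            taken |= ctx->cur[i] == crtc;
        }
        if (taken) {
            continue;
        }
        ctx->cur[idx] = crtc;
        bool r = ctx->prev[idx] != UNMATCHED && ctx->prev[idx] != crtc;
        match(ctx, idx + 1, matched + 1, replaced + (r ? 1 : 0));
    }
    // Even when an assignment above succeeded, also explore leaving this
    // connector without a CRTC: a later connector might then keep its old
    // CRTC, yielding a solution with fewer replacements overall.
    ctx->cur[idx] = UNMATCHED;
    match(ctx, idx + 1, matched, replaced);
}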
Kirill Primak
0a4cd88637 scene: resize damage ring on geometry update
(cherry picked from commit cf93d31736)
2024-08-23 09:30:19 -04:00
Kirill Primak
b79fc11df8 scene: update output geom on commit after dropping pending damage
Otherwise the whole output damage gets ignored.

(cherry picked from commit 62cc96b3a4)
2024-08-23 09:30:07 -04:00
Dudemanguy
1f96bcc1db backend/drm: fix a use-after-free
The page_flip can be destroyed, but it is unconditionally accessed later
on when setting present_flags. Fix this by simply setting the
present_flags before the page_flip gets destroyed.

(cherry picked from commit 3d2f09bace)
2024-08-23 09:29:52 -04:00
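The underlying pattern of the fix is to derive everything you need from an object before any path that can free it; schematically (hypothetical names, not the actual wlroots code):

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

struct page_flip { bool async; size_t connectors_len; };

static uint32_t finish_page_flip(struct page_flip *flip) {
    uint32_t present_flags = 0;
    if (!flip->async) {           // read the fields while the object is alive
        present_flags |= 1u << 0; // e.g. a "vsync" flag
    }
    if (flip->connectors_len == 0) {
        free(flip);               // flip must not be dereferenced past this point
    }
    return present_flags;         // safe: computed before the potential free
}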
zhoulei
0992422493 xwayland/xwm: listen shell destroy signal
Otherwise we get an invalid write in wl_list_remove.

Fixes: e209fe2d0 ("Fix memory leak in xwayland.c")

Signed-off-by: zhoulei <zhoulei@kylinos.cn>
(cherry picked from commit 2c64f36e88)
2024-08-23 09:29:38 -04:00
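The fix follows the usual wl_listener lifecycle pattern: register on the object's destroy signal, and in the handler remove and re-initialize the listener links so a later unconditional wl_list_remove() (e.g. in xwm_destroy()) doesn't write into freed memory. A condensed sketch with illustrative struct names:

#include <wayland-server-core.h>

struct my_xwm {
    struct wl_listener shell_new_surface;
    struct wl_listener shell_destroy;
};

static void handle_shell_destroy(struct wl_listener *listener, void *data) {
    struct my_xwm *xwm = wl_container_of(listener, xwm, shell_destroy);
    wl_list_remove(&xwm->shell_new_surface.link);
    wl_list_remove(&xwm->shell_destroy.link);
    // Re-init so the teardown path can call wl_list_remove() again safely.
    wl_list_init(&xwm->shell_new_surface.link);
    wl_list_init(&xwm->shell_destroy.link);
}

// Setup: xwm->shell_destroy.notify = handle_shell_destroy;
//        wl_signal_add(&shell->events.destroy, &xwm->shell_destroy);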
Leonardo Hernández Hernández
4900daa787 linux-drm-syncobj-v1: actually use the requested version
(cherry picked from commit baaec88e2f)
2024-08-23 09:29:27 -04:00
Alexander Orzechowski
72a290ba01 wlr_scene: Fix WLR_SCENE_DEBUG_DAMAGE_HIGHLIGHT when output is transformed
(cherry picked from commit 4f1104654f)
2024-08-15 11:27:42 -04:00
Alexander Orzechowski
055c0d28d1 wlr_scene: Don't special case swapchain buffers
This fixes direct scanout VRR. As direct scanout buffers are not part
of the swapchain, we would mistakenly union the damage instead of
subtracting it, meaning it would just accumulate indefinitely.

The special case existed in the first place for compositors that
might want to sidestep scene and commit their own buffers to the
output. In that case, scene could theoretically acknowledge the commit
and update the damage. Except this never really worked, because it
required WLR_OUTPUT_STATE_DAMAGE to be set, and that flag is optional.
This patch also properly acknowledges commits without damage.

In the use case of a compositor that wants to sidestep scene, it can
just trash the damage ring itself.

Fixes: #3871
(cherry picked from commit 14e1987f50)
2024-08-15 11:27:36 -04:00
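In pixman terms, the damage bookkeeping this commit restores is: when an output commit carries damage the backend will repaint, subtract it from the pending damage; unioning it (the old non-swapchain special case) makes the pending region grow forever. A minimal sketch, not the actual scene code:

#include <pixman.h>

// pending: damage the scene still owes the backend.
// committed: damage the backend just acknowledged in an output commit.
static void acknowledge_commit_damage(pixman_region32_t *pending,
        pixman_region32_t *committed) {
    // The backend will repaint `committed`, so stop tracking it.
    pixman_region32_subtract(pending, pending, committed);
    // A union here instead would make `pending` accumulate indefinitely,
    // which is exactly what happened with direct scanout buffers.
}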
Alexander Orzechowski
a4cafc1ef5 wlr_scene: Inline output_state_apply_damage
(cherry picked from commit 3e1358fec9)
2024-08-15 11:27:28 -04:00
Alexander Orzechowski
f9de859194 wlr_scene: Immediately apply pending output commit damage
There were two problems with the old implementation:
1. wlr_scene_output_commit would bail early if a frame wasn't requested
and there was no commit damage; however, commit damage could never
accumulate until rendering happened, so the check was subtly wrong.
2. Previously, we would fill the pending commit damage based on the
current state of the damage ring. However, during direct scanout, the
damage would accumulate, which meant we would submit damage from
previous frames even when we didn't need to.

(cherry picked from commit 147c5c37e3)
2024-08-15 11:27:21 -04:00
Alexander Orzechowski
43388cd277 wlr_scene: Funnel all damage operations through scene_output_damage
We want to add logic to this function later.

(cherry picked from commit 78dfa4f06d)
2024-08-15 11:27:12 -04:00
Isaac Freund
89e1ea130d backend/drm: don't set vsync present flag if page flip was async
(cherry picked from commit 08495d2596)
2024-08-15 11:13:05 -04:00
Kirill Primak
b4bec0cd3a backend/wayland: process initial events from globals correctly
The previous logic could lead to wlr_wl_backend.drm_render_name being
written twice, causing a memory leak. This commit fixes the race
condition.

(cherry picked from commit 3103ea3af9)
2024-08-15 11:06:50 -04:00
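The race was two discovery paths (linux-dmabuf feedback and legacy wl_drm) both writing the same owned string; the fix makes sure exactly one of them ends up owning it. In outline (illustrative names, not the full backend logic):

#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct my_wl_backend {
    char *drm_render_name; // owned; must be written by exactly one path
};

// dmabuf-feedback path
static void set_render_name(struct my_wl_backend *wl, const char *name) {
    assert(wl->drm_render_name == NULL); // a second write would leak the first
    wl->drm_render_name = strdup(name);
}

// When the feedback path is available, drop whatever legacy wl_drm produced
// during the initial roundtrip, before the feedback events are processed.
static void drop_legacy_render_name(struct my_wl_backend *wl) {
    free(wl->drm_render_name);
    wl->drm_render_name = NULL;
}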
Kirill Primak
7df7b0e092 linux-drm-syncobj: add missing decls in the header
(cherry picked from commit ee21deb458)
2024-08-15 11:06:02 -04:00
Kirill Primak
a095120b7d pointer-constraints: don't init/finish current/pending states
wlr_surface_synced does it automatically.

Reported-by: llyyr <llyyr.public@gmail.com>
(cherry picked from commit 70c99460ca)
2024-08-15 11:05:44 -04:00
Kirill Primak
9e107e3c77 xdg-popup: don't set a role resource destroy handler
wlr_xdg_surface tracks role resource destruction itself.

(cherry picked from commit c52e01e85f)
2024-08-15 11:05:32 -04:00
Consolatis
490769f2a6 ext-foreign-toplevel-list: use correct interface and add missing handler
Without this patch, a client calling handle.destroy() will trigger
an assert in libwayland due to a NULL pointer for the destroy handler.

Also implement a missing .destroy handler for the manager itself
and delay destruction of the manager resource from the .stop handler
to the .destroy handler.

(cherry picked from commit adf9d8b0be)
2024-08-15 11:05:20 -04:00
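The libwayland rule behind the crash: a request declared as a destructor still needs a non-NULL handler, and that handler must call wl_resource_destroy(). With the wayland-scanner-generated server header (header name assumed), the handle's destroy handler looks essentially like this:

#include <assert.h>
#include <wayland-server-core.h>
#include "ext-foreign-toplevel-list-v1-protocol.h" // generated by wayland-scanner

static const struct ext_foreign_toplevel_handle_v1_interface toplevel_handle_impl;

static void foreign_toplevel_handle_destroy(struct wl_client *client,
        struct wl_resource *resource) {
    assert(wl_resource_instance_of(resource,
        &ext_foreign_toplevel_handle_v1_interface, &toplevel_handle_impl));
    wl_resource_destroy(resource); // required: .destroy is a destructor request
}

static const struct ext_foreign_toplevel_handle_v1_interface toplevel_handle_impl = {
    .destroy = foreign_toplevel_handle_destroy,
};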
project-repo
2b8f94cf09 Fix memory leak in xwayland.c
(cherry picked from commit e209fe2d05)
2024-08-12 10:08:27 -04:00
project-repo
52834f29ad Fix memory leak in drm.c
(cherry picked from commit 3cae2a2c01)
2024-08-12 10:08:20 -04:00
Alexander Orzechowski
03f06207f0 wlr_scene: Force blend mode to PREMULTIPLIED if calculate visibility is disabled
We do it here so WLR_SCENE_HIGHLIGHT_TRANSPARENT_REGION doesn't break.

(cherry picked from commit 4481c6b243)
2024-08-06 08:09:40 -04:00
Kirill Primak
81a08aeeb0 output-power-management: send zwlr_output_power_v1.failed on output destroy
From the event description:

This event indicates that the output power management mode control is no
longer valid. This can happen for a number of reasons, including:
<...>
- The output disappeared

(cherry picked from commit de574ac098)
2024-08-06 08:09:28 -04:00
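Per the quoted event description, the listener on the output's destroy signal now sends the terminal event before tearing down the server-side object; after the patch it is essentially (mirroring the hunk shown further below):

static void output_power_handle_output_destroy(struct wl_listener *listener,
        void *data) {
    struct wlr_output_power_v1 *output_power =
        wl_container_of(listener, output_power, output_destroy_listener);
    // Tell the client its mode control is no longer valid, then clean up.
    zwlr_output_power_v1_send_failed(output_power->resource);
    output_power_destroy(output_power);
}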
chenyongxing
6cc80472cb render/vulkan: Fix invalid draw rect clip region in blend none mode
(cherry picked from commit 015bb8512e)
2024-08-06 08:09:15 -04:00
Isaac Freund
2005cc0fd6 docs: update comments for wlr_output API changes
The old wlr_output_{commit,test}() functions are still mentioned in
multiple places.

(cherry picked from commit 7550e483ae)
2024-07-15 09:58:22 -04:00
Isaac Freund
7d0f337a35 wlr_output: remove dead function
(cherry picked from commit 2a8a23c467)
2024-07-15 09:58:22 -04:00
Bill Li
4534421279 ci: use package x11-servers/xwayland instead of x11-servers/xwayland-devel
(cherry picked from commit 22adc65586)
2024-07-15 09:58:22 -04:00
22 changed files with 178 additions and 127 deletions

View file

@ -20,7 +20,7 @@ packages:
- x11/xcb-util-errors
- x11/xcb-util-renderutil
- x11/xcb-util-wm
- x11-servers/xwayland-devel
- x11-servers/xwayland
- sysutils/libdisplay-info
- sysutils/seatd
- gmake

View file

@ -396,6 +396,7 @@ void finish_drm_resources(struct wlr_drm_backend *drm) {
struct wlr_drm_plane *plane = &drm->planes[i];
drm_plane_finish_surface(plane);
wlr_drm_format_set_finish(&plane->formats);
free(plane->cursor_sizes);
}
free(drm->planes);
@ -607,6 +608,7 @@ static bool drm_commit(struct wlr_drm_backend *drm,
if (page_flip == NULL) {
return false;
}
page_flip->async = (flags & DRM_MODE_PAGE_FLIP_ASYNC);
}
bool ok = drm->iface->commit(drm, state, page_flip, flags, test_only);
@ -2008,6 +2010,12 @@ static void handle_page_flip(int fd, unsigned seq,
if (conn != NULL) {
conn->pending_page_flip = NULL;
}
uint32_t present_flags = WLR_OUTPUT_PRESENT_HW_CLOCK | WLR_OUTPUT_PRESENT_HW_COMPLETION;
if (!page_flip->async) {
present_flags |= WLR_OUTPUT_PRESENT_VSYNC;
}
if (page_flip->connectors_len == 0) {
drm_page_flip_destroy(page_flip);
}
@ -2038,8 +2046,6 @@ static void handle_page_flip(int fd, unsigned seq,
drm_fb_move(&layer->current_fb, &layer->queued_fb);
}
uint32_t present_flags = WLR_OUTPUT_PRESENT_VSYNC |
WLR_OUTPUT_PRESENT_HW_CLOCK | WLR_OUTPUT_PRESENT_HW_COMPLETION;
/* Don't report ZERO_COPY in multi-gpu situations, because we had to copy
* data between the GPUs, even if we were using the direct scanout
* interface.

View file

@ -170,12 +170,6 @@ static bool match_obj_(struct match_state *st, size_t skips, size_t score, size_
has_best = true;
}
}
if (st->orig[i] == UNMATCHED) {
st->res[i] = UNMATCHED;
if (match_obj_(st, skips, score, replaced, i + 1)) {
has_best = true;
}
}
if (st->exit_early) {
return true;
}
@ -211,13 +205,13 @@ static bool match_obj_(struct match_state *st, size_t skips, size_t score, size_
}
}
if (has_best) {
return true;
}
// Maybe this resource can't be matched
st->res[i] = UNMATCHED;
return match_obj_(st, skips, score, replaced, i + 1);
if (match_obj_(st, skips, score, replaced, i + 1)) {
has_best = true;
}
return has_best;
}
size_t match_obj(size_t num_objs, const uint32_t objs[static restrict num_objs],

View file

@ -178,7 +178,9 @@ static void linux_dmabuf_feedback_v1_handle_main_device(void *data,
"falling back to primary node", name);
}
feedback_data->backend->drm_render_name = strdup(name);
struct wlr_wl_backend *wl = feedback_data->backend;
assert(wl->drm_render_name == NULL);
wl->drm_render_name = strdup(name);
drmFreeDevice(&device);
}
@ -305,6 +307,7 @@ static char *get_render_name(const char *name) {
static void legacy_drm_handle_device(void *data, struct wl_drm *drm,
const char *name) {
struct wlr_wl_backend *wl = data;
assert(wl->drm_render_name == NULL);
wl->drm_render_name = get_render_name(name);
}
@ -621,6 +624,8 @@ struct wlr_backend *wlr_wl_backend_create(struct wl_event_loop *loop,
goto error_registry;
}
wl_display_roundtrip(wl->remote_display); // process initial event bursts
struct zwp_linux_dmabuf_feedback_v1 *linux_dmabuf_feedback_v1 = NULL;
struct wlr_wl_linux_dmabuf_feedback_v1 feedback_data = { .backend = wl };
if (wl->zwp_linux_dmabuf_v1 != NULL &&
@ -638,15 +643,17 @@ struct wlr_backend *wlr_wl_backend_create(struct wl_event_loop *loop,
if (wl->legacy_drm != NULL) {
wl_drm_destroy(wl->legacy_drm);
wl->legacy_drm = NULL;
free(wl->drm_render_name);
wl->drm_render_name = NULL;
}
}
wl_display_roundtrip(wl->remote_display); // get linux-dmabuf formats
wl_display_roundtrip(wl->remote_display); // get linux-dmabuf feedback events
if (feedback_data.format_table != NULL) {
munmap(feedback_data.format_table, feedback_data.format_table_size);
}
if (feedback_data.format_table != NULL) {
munmap(feedback_data.format_table, feedback_data.format_table_size);
}
if (linux_dmabuf_feedback_v1 != NULL) {
zwp_linux_dmabuf_feedback_v1_destroy(linux_dmabuf_feedback_v1);
}

View file

@ -87,7 +87,7 @@ static void output_handle_frame(struct wl_listener *listener, void *data) {
layers_arr.size / sizeof(struct wlr_output_layer_state));
if (!wlr_output_test_state(output->wlr_output, &output_state)) {
wlr_log(WLR_ERROR, "wlr_output_test() failed");
wlr_log(WLR_ERROR, "wlr_output_test_state() failed");
return;
}

View file

@ -164,6 +164,8 @@ struct wlr_drm_page_flip {
struct wl_list link; // wlr_drm_connector.page_flips
struct wlr_drm_page_flip_connector *connectors;
size_t connectors_len;
// True if DRM_MODE_PAGE_FLIP_ASYNC was set
bool async;
};
struct wlr_drm_page_flip_connector {

View file

@ -12,6 +12,9 @@
#include <wayland-server-core.h>
#include <wlr/util/addon.h>
struct wlr_buffer;
struct wlr_surface;
struct wlr_linux_drm_syncobj_surface_v1_state {
struct wlr_drm_syncobj_timeline *acquire_timeline;
uint64_t acquire_point;

View file

@ -121,8 +121,9 @@ struct wlr_render_pass;
* The `frame` event will be emitted when it is a good time for the compositor
* to submit a new frame.
*
* To render a new frame, compositors should call wlr_output_begin_render_pass(),
* perform rendering on that render pass and finally call wlr_output_commit().
* To render a new frame compositors should call wlr_output_begin_render_pass(),
* perform rendering on that render pass, and finally call
* wlr_output_commit_state().
*/
struct wlr_output {
const struct wlr_output_impl *impl;
@ -280,7 +281,7 @@ void wlr_output_destroy_global(struct wlr_output *output);
* the allocator and renderer to different values.
*
* Call this function prior to any call to wlr_output_begin_render_pass(),
* wlr_output_commit() or wlr_output_cursor_create().
* wlr_output_commit_state() or wlr_output_cursor_create().
*
* The buffer capabilities of the provided must match the capabilities of the
* output's backend. Returns false otherwise.
@ -369,12 +370,6 @@ void wlr_output_lock_attach_render(struct wlr_output *output, bool lock);
* a lock.
*/
void wlr_output_lock_software_cursors(struct wlr_output *output, bool lock);
/**
* Renders software cursors. This is a utility function that can be called when
* compositors render.
*/
void wlr_output_render_software_cursors(struct wlr_output *output,
const pixman_region32_t *damage);
/**
* Render software cursors.
*

View file

@ -23,16 +23,16 @@
*
* To configure output layers, callers should call wlr_output_layer_create() to
* create layers, attach struct wlr_output_layer_state onto
* struct wlr_output_state via wlr_output_set_layers() to describe their new
* state, and commit the output via wlr_output_commit().
* struct wlr_output_state via wlr_output_state_set_layers() to describe their new
* state, and commit the output via wlr_output_commit_state().
*
* Backends may have arbitrary limitations when it comes to displaying output
* layers. Backends indicate whether or not a layer can be displayed via
* wlr_output_layer_state.accepted after wlr_output_test() or
* wlr_output_commit() is called. Compositors using the output layers API
* directly are expected to setup layers, call wlr_output_test(), paint the
* layers that the backend rejected with the renderer, then call
* wlr_output_commit().
* wlr_output_layer_state.accepted after wlr_output_test_state() or
* wlr_output_commit_state() is called. Compositors using the output layers API
* directly are expected to setup layers, call wlr_output_test_state(), paint
* the layers that the backend rejected with the renderer, then call
* wlr_output_commit_state().
*
* Callers are responsible for disabling output layers when they need the full
* output contents to be composited onto a single buffer, e.g. during screen
@ -72,9 +72,9 @@ struct wlr_output_layer_state {
// to damage the whole buffer.
const pixman_region32_t *damage;
// Populated by the backend after wlr_output_test() and wlr_output_commit(),
// indicates whether the backend has acknowledged and will take care of
// displaying the layer
// Populated by the backend after wlr_output_test_state() and
// wlr_output_commit_state(), indicates whether the backend has acknowledged
// and will take care of displaying the layer
bool accepted;
};

View file

@ -89,7 +89,7 @@ void wlr_presentation_event_from_output(struct wlr_presentation_event *event,
*
* Instead of calling wlr_presentation_surface_sampled() and managing the
* struct wlr_presentation_feedback itself, the compositor can call this function
* before a wlr_output_commit() call to indicate that the surface's current
* before a wlr_output_commit_state() call to indicate that the surface's current
* contents have been copied to a buffer which will be displayed on the output.
*/
void wlr_presentation_surface_textured_on_output(struct wlr_surface *surface,

View file

@ -135,6 +135,7 @@ struct wlr_xwm {
struct wl_listener compositor_new_surface;
struct wl_listener compositor_destroy;
struct wl_listener shell_v1_new_surface;
struct wl_listener shell_v1_destroy;
struct wl_listener seat_set_selection;
struct wl_listener seat_set_primary_selection;
struct wl_listener seat_start_drag;

View file

@ -1,7 +1,7 @@
project(
'wlroots',
'c',
version: '0.18.0',
version: '0.18.1',
license: 'MIT',
meson_version: '>=0.59.0',
default_options: [

View file

@ -572,16 +572,10 @@ static void render_pass_add_rect(struct wlr_render_pass *wlr_pass,
},
};
VkClearRect clear_rect = {
.rect = {
.offset = { box.x, box.y },
.extent = { box.width, box.height },
},
.layerCount = 1,
};
for (int i = 0; i < clip_rects_len; i++) {
VkRect2D rect;
convert_pixman_box_to_vk_rect(&clip_rects[i], &rect);
vkCmdSetScissor(cb, 0, 1, &rect);
convert_pixman_box_to_vk_rect(&clip_rects[i], &clear_rect.rect);
vkCmdClearAttachments(cb, 1, &clear_att, 1, &clear_rect);
}
break;

View file

@ -1224,7 +1224,6 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
int mem_type = vulkan_find_mem_type(vk_renderer->dev,
VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
VK_MEMORY_PROPERTY_HOST_COHERENT_BIT |
VK_MEMORY_PROPERTY_HOST_CACHED_BIT,
mem_reqs.memoryTypeBits);
if (mem_type < 0) {
@ -1361,6 +1360,19 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
return false;
}
VkMappedMemoryRange mem_range = {
.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
.memory = dst_img_memory,
.offset = 0,
.size = VK_WHOLE_SIZE,
};
res = vkInvalidateMappedMemoryRanges(dev, 1, &mem_range);
if (res != VK_SUCCESS) {
wlr_vk_error("vkInvalidateMappedMemoryRanges", res);
vkUnmapMemory(dev, dst_img_memory);
return false;
}
const char *d = (const char *)v + img_sub_layout.offset;
unsigned char *p = (unsigned char *)data + dst_y * stride;
uint32_t bytes_per_pixel = pixel_format_info->bytes_per_block;
@ -1376,6 +1388,7 @@ bool vulkan_read_pixels(struct wlr_vk_renderer *vk_renderer,
vkUnmapMemory(dev, dst_img_memory);
// Don't need to free anything else, since memory and image are cached
return true;
free_memory:
vkFreeMemory(dev, dst_img_memory, NULL);
destroy_image:

View file

@ -315,6 +315,43 @@ static void transform_output_box(struct wlr_box *box, const struct render_data *
wlr_box_transform(box, box, transform, data->trans_width, data->trans_height);
}
static void scene_output_damage(struct wlr_scene_output *scene_output,
const pixman_region32_t *region) {
if (wlr_damage_ring_add(&scene_output->damage_ring, region)) {
wlr_output_schedule_frame(scene_output->output);
struct wlr_output *output = scene_output->output;
enum wl_output_transform transform =
wlr_output_transform_invert(scene_output->output->transform);
int width = output->width;
int height = output->height;
if (transform & WL_OUTPUT_TRANSFORM_90) {
width = output->height;
height = output->width;
}
pixman_region32_t frame_damage;
pixman_region32_init(&frame_damage);
wlr_region_transform(&frame_damage, region, transform, width, height);
pixman_region32_union(&scene_output->pending_commit_damage,
&scene_output->pending_commit_damage, &frame_damage);
pixman_region32_intersect_rect(&scene_output->pending_commit_damage,
&scene_output->pending_commit_damage, 0, 0, output->width, output->height);
pixman_region32_fini(&frame_damage);
}
}
static void scene_output_damage_whole(struct wlr_scene_output *scene_output) {
struct wlr_damage_ring *ring = &scene_output->damage_ring;
pixman_region32_t damage;
pixman_region32_init_rect(&damage, 0, 0, ring->width, ring->height);
scene_output_damage(scene_output, &damage);
pixman_region32_fini(&damage);
}
static void scene_damage_outputs(struct wlr_scene *scene, pixman_region32_t *damage) {
if (!pixman_region32_not_empty(damage)) {
return;
@ -328,9 +365,7 @@ static void scene_damage_outputs(struct wlr_scene *scene, pixman_region32_t *dam
pixman_region32_translate(&output_damage,
-scene_output->x, -scene_output->y);
scale_output_damage(&output_damage, scene_output->output->scale);
if (wlr_damage_ring_add(&scene_output->damage_ring, &output_damage)) {
wlr_output_schedule_frame(scene_output->output);
}
scene_output_damage(scene_output, &output_damage);
pixman_region32_fini(&output_damage);
}
}
@ -800,9 +835,7 @@ void wlr_scene_buffer_set_buffer_with_damage(struct wlr_scene_buffer *scene_buff
pixman_region32_translate(&output_damage,
(int)round((lx - scene_output->x) * output_scale),
(int)round((ly - scene_output->y) * output_scale));
if (wlr_damage_ring_add(&scene_output->damage_ring, &output_damage)) {
wlr_output_schedule_frame(scene_output->output);
}
scene_output_damage(scene_output, &output_damage);
pixman_region32_fini(&output_damage);
}
@ -1226,7 +1259,7 @@ static void scene_entry_render(struct render_list_entry *entry, const struct ren
struct wlr_texture *texture = scene_buffer_get_texture(scene_buffer,
data->output->output->renderer);
if (texture == NULL) {
wlr_damage_ring_add(&data->output->damage_ring, &render_region);
scene_output_damage(data->output, &render_region);
break;
}
@ -1242,7 +1275,8 @@ static void scene_entry_render(struct render_list_entry *entry, const struct ren
.clip = &render_region,
.alpha = &scene_buffer->opacity,
.filter_mode = scene_buffer->filter_mode,
.blend_mode = pixman_region32_not_empty(&opaque) ?
.blend_mode = !data->output->scene->calculate_visibility ||
pixman_region32_not_empty(&opaque) ?
WLR_RENDER_BLEND_MODE_PREMULTIPLIED : WLR_RENDER_BLEND_MODE_NONE,
});
@ -1312,8 +1346,11 @@ static void scene_node_output_update(struct wlr_scene_node *node,
static void scene_output_update_geometry(struct wlr_scene_output *scene_output,
bool force_update) {
wlr_damage_ring_add_whole(&scene_output->damage_ring);
wlr_output_schedule_frame(scene_output->output);
int ring_width, ring_height;
wlr_output_transformed_resolution(scene_output->output, &ring_width, &ring_height);
wlr_damage_ring_set_bounds(&scene_output->damage_ring, ring_width, ring_height);
scene_output_damage_whole(scene_output);
scene_node_output_update(&scene_output->scene->tree.node,
&scene_output->scene->outputs, NULL, force_update ? scene_output : NULL);
@ -1325,6 +1362,19 @@ static void scene_output_handle_commit(struct wl_listener *listener, void *data)
struct wlr_output_event_commit *event = data;
const struct wlr_output_state *state = event->state;
// if the output has been committed with a certain damage, we know that region
// will be acknowledged by the backend so we don't need to keep track of it
// anymore
if (state->committed & WLR_OUTPUT_STATE_BUFFER) {
if (state->committed & WLR_OUTPUT_STATE_DAMAGE) {
pixman_region32_subtract(&scene_output->pending_commit_damage,
&scene_output->pending_commit_damage, &state->damage);
} else {
pixman_region32_fini(&scene_output->pending_commit_damage);
pixman_region32_init(&scene_output->pending_commit_damage);
}
}
bool force_update = state->committed & (
WLR_OUTPUT_STATE_TRANSFORM |
WLR_OUTPUT_STATE_SCALE |
@ -1335,28 +1385,6 @@ static void scene_output_handle_commit(struct wl_listener *listener, void *data)
scene_output_update_geometry(scene_output, force_update);
}
// if the output has been committed with a certain damage, we know that region
// will be acknowledged by the backend so we don't need to keep track of it
// anymore
if (state->committed & WLR_OUTPUT_STATE_DAMAGE) {
bool tracking_buffer = false;
struct wlr_damage_ring_buffer *buffer;
wl_list_for_each(buffer, &scene_output->damage_ring.buffers, link) {
if (buffer->buffer == state->buffer) {
tracking_buffer = true;
break;
}
}
if (tracking_buffer) {
pixman_region32_subtract(&scene_output->pending_commit_damage,
&scene_output->pending_commit_damage, &state->damage);
} else {
pixman_region32_union(&scene_output->pending_commit_damage,
&scene_output->pending_commit_damage, &state->damage);
}
}
if (scene_output->scene->debug_damage_option == WLR_SCENE_DEBUG_DAMAGE_HIGHLIGHT &&
!wl_list_empty(&scene_output->damage_highlight_regions)) {
wlr_output_schedule_frame(scene_output->output);
@ -1367,9 +1395,7 @@ static void scene_output_handle_damage(struct wl_listener *listener, void *data)
struct wlr_scene_output *scene_output = wl_container_of(listener,
scene_output, output_damage);
struct wlr_output_event_damage *event = data;
if (wlr_damage_ring_add(&scene_output->damage_ring, event->damage)) {
wlr_output_schedule_frame(scene_output->output);
}
scene_output_damage(scene_output, event->damage);
}
static void scene_output_handle_needs_frame(struct wl_listener *listener, void *data) {
@ -1556,21 +1582,6 @@ static bool construct_render_list_iterator(struct wlr_scene_node *node,
return false;
}
static void output_state_apply_damage(const struct render_data *data,
struct wlr_output_state *state) {
struct wlr_scene_output *output = data->output;
pixman_region32_t frame_damage;
pixman_region32_init(&frame_damage);
pixman_region32_copy(&frame_damage, &output->damage_ring.current);
transform_output_damage(&frame_damage, data);
pixman_region32_union(&output->pending_commit_damage,
&output->pending_commit_damage, &frame_damage);
pixman_region32_fini(&frame_damage);
wlr_output_state_set_damage(state, &output->pending_commit_damage);
}
static void scene_buffer_send_dmabuf_feedback(const struct wlr_scene *scene,
struct wlr_scene_buffer *scene_buffer,
const struct wlr_linux_dmabuf_feedback_v1_init_options *options) {
@ -1749,7 +1760,7 @@ bool wlr_scene_output_build_state(struct wlr_scene_output *scene_output,
if (state->committed & WLR_OUTPUT_STATE_TRANSFORM) {
if (render_data.transform != state->transform) {
wlr_damage_ring_add_whole(&scene_output->damage_ring);
scene_output_damage_whole(scene_output);
}
render_data.transform = state->transform;
@ -1757,7 +1768,7 @@ bool wlr_scene_output_build_state(struct wlr_scene_output *scene_output,
if (state->committed & WLR_OUTPUT_STATE_SCALE) {
if (render_data.scale != state->scale) {
wlr_damage_ring_add_whole(&scene_output->damage_ring);
scene_output_damage_whole(scene_output);
}
render_data.scale = state->scale;
@ -1791,7 +1802,7 @@ bool wlr_scene_output_build_state(struct wlr_scene_output *scene_output,
render_data.trans_width, render_data.trans_height);
if (debug_damage == WLR_SCENE_DEBUG_DAMAGE_RERENDER) {
wlr_damage_ring_add_whole(&scene_output->damage_ring);
scene_output_damage_whole(scene_output);
}
struct timespec now;
@ -1828,11 +1839,11 @@ bool wlr_scene_output_build_state(struct wlr_scene_output *scene_output,
}
}
wlr_damage_ring_add(&scene_output->damage_ring, &acc_damage);
scene_output_damage(scene_output, &acc_damage);
pixman_region32_fini(&acc_damage);
}
output_state_apply_damage(&render_data, state);
wlr_output_state_set_damage(state, &scene_output->pending_commit_damage);
// We only want to try direct scanout if:
// - There is only one entry in the render list
@ -1969,11 +1980,18 @@ bool wlr_scene_output_build_state(struct wlr_scene_output *scene_output,
int64_t time_diff_ms = timespec_to_msec(&time_diff);
float alpha = 1.0 - (double)time_diff_ms / HIGHLIGHT_DAMAGE_FADEOUT_TIME;
pixman_region32_t clip;
pixman_region32_init(&clip);
pixman_region32_copy(&clip, &damage->region);
transform_output_damage(&clip, &render_data);
wlr_render_pass_add_rect(render_pass, &(struct wlr_render_rect_options){
.box = { .width = buffer->width, .height = buffer->height },
.color = { .r = alpha * 0.5, .g = 0, .b = 0, .a = alpha * 0.5 },
.clip = &damage->region,
.clip = &clip,
});
pixman_region32_fini(&clip);
}
}

View file

@ -12,14 +12,18 @@
#define FOREIGN_TOPLEVEL_LIST_V1_VERSION 1
static const struct ext_foreign_toplevel_list_v1_interface toplevel_handle_impl;
static const struct ext_foreign_toplevel_handle_v1_interface toplevel_handle_impl;
static void foreign_toplevel_handle_destroy(struct wl_client *client,
struct wl_resource *resource) {
assert(wl_resource_instance_of(resource,
&ext_foreign_toplevel_handle_v1_interface,
&toplevel_handle_impl));
wl_resource_destroy(resource);
}
static const struct ext_foreign_toplevel_list_v1_interface toplevel_handle_impl = {
static const struct ext_foreign_toplevel_handle_v1_interface toplevel_handle_impl = {
.destroy = foreign_toplevel_handle_destroy,
};
@ -191,12 +195,23 @@ static void foreign_toplevel_list_handle_stop(struct wl_client *client,
&foreign_toplevel_list_impl));
ext_foreign_toplevel_list_v1_send_finished(resource);
wl_list_remove(wl_resource_get_link(resource));
wl_list_init(wl_resource_get_link(resource));
}
static void foreign_toplevel_list_handle_destroy(struct wl_client *client,
struct wl_resource *resource) {
assert(wl_resource_instance_of(resource,
&ext_foreign_toplevel_list_v1_interface,
&foreign_toplevel_list_impl));
wl_resource_destroy(resource);
}
static const struct ext_foreign_toplevel_list_v1_interface
foreign_toplevel_list_impl = {
.stop = foreign_toplevel_list_handle_stop
.stop = foreign_toplevel_list_handle_stop,
.destroy = foreign_toplevel_list_handle_destroy
};
static void foreign_toplevel_list_resource_destroy(

View file

@ -424,6 +424,8 @@ static bool check_syncobj_eventfd(int drm_fd) {
struct wlr_linux_drm_syncobj_manager_v1 *wlr_linux_drm_syncobj_manager_v1_create(
struct wl_display *display, uint32_t version, int drm_fd) {
assert(version <= LINUX_DRM_SYNCOBJ_V1_VERSION);
if (!check_syncobj_eventfd(drm_fd)) {
wlr_log(WLR_INFO, "DRM syncobj eventfd unavailable, disabling linux-drm-syncobj-v1");
return NULL;
@ -441,7 +443,7 @@ struct wlr_linux_drm_syncobj_manager_v1 *wlr_linux_drm_syncobj_manager_v1_create
manager->global = wl_global_create(display,
&wp_linux_drm_syncobj_manager_v1_interface,
LINUX_DRM_SYNCOBJ_V1_VERSION, manager, manager_bind);
version, manager, manager_bind);
if (manager->global == NULL) {
goto error_drm_fd;
}

View file

@ -46,6 +46,7 @@ static void output_power_handle_output_destroy(struct wl_listener *listener,
void *data) {
struct wlr_output_power_v1 *output_power =
wl_container_of(listener, output_power, output_destroy_listener);
zwlr_output_power_v1_send_failed(output_power->resource);
output_power_destroy(output_power);
}

View file

@ -53,8 +53,6 @@ static void pointer_constraint_destroy(struct wlr_pointer_constraint_v1 *constra
wl_list_remove(&constraint->surface_commit.link);
wl_list_remove(&constraint->surface_destroy.link);
wl_list_remove(&constraint->seat_destroy.link);
pixman_region32_fini(&constraint->current.region);
pixman_region32_fini(&constraint->pending.region);
pixman_region32_fini(&constraint->region);
free(constraint);
}
@ -258,9 +256,6 @@ static void pointer_constraint_create(struct wl_client *client,
pixman_region32_init(&constraint->region);
pixman_region32_init(&constraint->pending.region);
pixman_region32_init(&constraint->current.region);
pointer_constraint_set_region(constraint, region_resource);
pointer_constraint_commit(constraint);

View file

@ -361,15 +361,6 @@ static const struct wlr_surface_synced_impl surface_synced_impl = {
.state_size = sizeof(struct wlr_xdg_popup_state),
};
static void xdg_popup_handle_resource_destroy(struct wl_resource *resource) {
struct wlr_xdg_popup *popup =
wlr_xdg_popup_from_resource(resource);
if (popup == NULL) {
return;
}
wlr_xdg_popup_destroy(popup);
}
void create_xdg_popup(struct wlr_xdg_surface *surface, struct wlr_xdg_surface *parent,
struct wlr_xdg_positioner *positioner, uint32_t id) {
if (!wlr_xdg_positioner_is_complete(positioner)) {
@ -409,8 +400,7 @@ void create_xdg_popup(struct wlr_xdg_surface *surface, struct wlr_xdg_surface *p
goto error_synced;
}
wl_resource_set_implementation(surface->popup->resource,
&xdg_popup_implementation, surface->popup,
xdg_popup_handle_resource_destroy);
&xdg_popup_implementation, surface->popup, NULL);
surface->role = WLR_XDG_SURFACE_ROLE_POPUP;

View file

@ -91,6 +91,7 @@ void wlr_xwayland_destroy(struct wlr_xwayland *xwayland) {
}
xwayland->server = NULL;
wlr_xwayland_shell_v1_destroy(xwayland->shell_v1);
xwm_destroy(xwayland->xwm);
free(xwayland);
}

View file

@ -1786,6 +1786,16 @@ static void handle_shell_v1_new_surface(struct wl_listener *listener,
}
}
static void handle_shell_v1_destroy(struct wl_listener *listener,
void *data) {
struct wlr_xwm *xwm =
wl_container_of(listener, xwm, shell_v1_destroy);
wl_list_remove(&xwm->shell_v1_new_surface.link);
wl_list_remove(&xwm->shell_v1_destroy.link);
wl_list_init(&xwm->shell_v1_new_surface.link);
wl_list_init(&xwm->shell_v1_destroy.link);
}
void wlr_xwayland_surface_activate(struct wlr_xwayland_surface *xsurface,
bool activated) {
struct wlr_xwayland_surface *focused = xsurface->xwm->focus_surface;
@ -1913,6 +1923,7 @@ void xwm_destroy(struct wlr_xwm *xwm) {
wl_list_remove(&xwm->compositor_new_surface.link);
wl_list_remove(&xwm->compositor_destroy.link);
wl_list_remove(&xwm->shell_v1_new_surface.link);
wl_list_remove(&xwm->shell_v1_destroy.link);
xcb_disconnect(xwm->xcb_conn);
struct pending_startup_id *pending, *next;
@ -2257,6 +2268,9 @@ struct wlr_xwm *xwm_create(struct wlr_xwayland *xwayland, int wm_fd) {
xwm->shell_v1_new_surface.notify = handle_shell_v1_new_surface;
wl_signal_add(&xwayland->shell_v1->events.new_surface,
&xwm->shell_v1_new_surface);
xwm->shell_v1_destroy.notify = handle_shell_v1_destroy;
wl_signal_add(&xwayland->shell_v1->events.destroy,
&xwm->shell_v1_destroy);
xwm_create_wm_window(xwm);