render: when double-buffering, pre-apply previous frame's damage early

Foot likes it when the compositor releases the buffer immediately, as
that means we only have to re-render the cells that have changed since
the last frame.

For various reasons, not all compositors do this. Foot is then
typically forced to switch between two buffers, i.e. to double-buffer.

When double-buffering, each frame starts by copying the previous
frame's damage over to the new frame. Only then do we start rendering
the updated cells.
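
For illustration, that copy amounts to something like the sketch below
(a minimal, hypothetical version; the function name, the 32bpp
assumption and the stride handling are made up for this example, and
are not foot's actual code):

    #include <stdint.h>
    #include <string.h>
    #include <pixman.h>

    /* Copy last frame's damaged rectangles from 'src' to 'dst'. Both
     * buffers are assumed to have the same dimensions and stride, and
     * 32 bits per pixel. */
    static void
    copy_prev_damage(uint8_t *dst, const uint8_t *src, int stride,
                     pixman_region32_t *damage)
    {
        int n_rects;
        pixman_box32_t *rects =
            pixman_region32_rectangles(damage, &n_rects);

        for (int i = 0; i < n_rects; i++) {
            const pixman_box32_t *r = &rects[i];
            size_t bytes = (size_t)(r->x2 - r->x1) * 4;

            /* One memcpy(3) per damaged row */
            for (int y = r->y1; y < r->y2; y++) {
                size_t ofs = (size_t)y * stride + (size_t)r->x1 * 4;
                memcpy(dst + ofs, src + ofs, bytes);
            }
        }
    }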

Bringing over the previous frame's damage can be slow if the changed
area is large (e.g. when scrolling one or a few lines, or on
full-screen updates). It is also done single-threaded. Thus it not
only slows down frame rendering, but pauses everything else (e.g.
input processing). All in all, it reduces performance and increases
input latency.

But we don't have to wait until it's time to render a frame to copy
over the previous frame's damage. We can do that as soon as the
compositor has released the buffer (for the frame _before_ the
previous frame). And we can do this in a thread.

This frees up foot to continue processing input, and reduces frame
rendering time since we can now start rendering the modified cells
immediately, without first doing a large memcpy(3).
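
A rough sketch of the idea, reusing copy_prev_damage() from the sketch
above (all names, fields and types here are hypothetical, for
illustration only):

    #include <pthread.h>
    #include <stdint.h>
    #include <pixman.h>

    /* Hypothetical buffer view; foot's real 'struct buffer' differs */
    struct buf_view {
        uint8_t *data;
        int stride;
        int age;
    };

    struct pre_apply_ctx {
        struct buf_view *dst;         /* buffer the compositor just released */
        const struct buf_view *src;   /* the frame currently on screen */
        pixman_region32_t damage;     /* what changed in the last frame */
    };

    static void *
    pre_apply_thread(void *arg)
    {
        struct pre_apply_ctx *ctx = arg;

        /* Bring over the previous frame's damage while the main thread
         * keeps processing input */
        copy_prev_damage(ctx->dst->data, ctx->src->data,
                         ctx->dst->stride, &ctx->damage);

        /* Age 0: the buffer now matches the last frame exactly, and
         * can be rendered to immediately */
        ctx->dst->age = 0;
        return NULL;
    }

    /* Called from the wl_buffer release handler */
    static void
    start_pre_apply(struct pre_apply_ctx *ctx, pthread_t *tid)
    {
        pthread_create(tid, NULL, &pre_apply_thread, ctx);
    }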

In worst case scenarios (or perhaps we should consider them best case
scenarios...), I've seen up to a 10x performance increase in frame
rendering times (this obviously does *not* include the time it takes
to copy over the previous frame's damage, since that now affects
neither input processing nor frame rendering).

Implemented by adding a callback mechanism to the shm abstraction
layer. We use it for the grid buffers, kicking off a thread that
copies the previous frame's damage and resets the buffer's age to 0
(so that foot understands it can start rendering to it immediately
when it later needs to render a frame).
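
With the new shm_chain_new() signature (see the shm.c diff below), the
grid-side wiring might look roughly like this; only the
shm_chain_new() call matches the actual change, everything else is a
hypothetical sketch:

    #include <stddef.h>
    #include "shm.h"   /* shm_chain_new(), struct buffer, struct buffer_chain */

    struct terminal;   /* foot's terminal instance */

    /* Hypothetical release callback */
    static void
    on_grid_buffer_release(struct buffer *buf, void *data)
    {
        struct terminal *term = data;

        /* Compositor is done with this buffer: start the pre-apply
         * thread here (see the sketch above) */
        (void)buf; (void)term;
    }

    static struct buffer_chain *
    grid_chain_new(struct wayland *wayl, struct terminal *term,
                   size_t pix_instances, enum shm_bit_depth depth)
    {
        /* The callback and its cookie are inherited by every buffer
         * created from this chain */
        return shm_chain_new(wayl, true /* scrollable */, pix_instances,
                             depth, &on_grid_buffer_release, term);
    }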

Since we have no certain way of knowing whether a compositor releases
buffers immediately or not, we use a bit of heuristics; if we see 10
consecutive non-immediate releases (that is, we reset the counter as
soon as we do see an immediate release), this new "pre-apply damage"
logic is enabled. It can be force-disabled with tweak.pre-apply-damage=no.
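
In sketch form (names and the exact bookkeeping are assumptions; only
the threshold of 10 and the reset-on-immediate-release behavior are
from the actual change):

    #include <stdbool.h>

    #define PRE_APPLY_THRESHOLD 10

    struct release_stats {
        int consecutive_non_immediate;
        bool pre_apply_enabled;
    };

    static void
    note_release(struct release_stats *stats, bool was_immediate,
                 bool force_disabled /* tweak.pre-apply-damage=no */)
    {
        if (was_immediate) {
            /* Any immediate release resets the counter */
            stats->consecutive_non_immediate = 0;
            return;
        }

        stats->consecutive_non_immediate++;
        if (stats->consecutive_non_immediate >= PRE_APPLY_THRESHOLD &&
            !force_disabled)
        {
            stats->pre_apply_enabled = true;
        }
    }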

We also need to take care to wait for the thread before resetting the
render's "last_buf" pointer (or we'll SEGFAULT in the thread...).

We must also ensure the thread has finished before we start rendering
a new frame. Under normal circumstances the wait time is effectively
zero; the thread has almost always finished long before we need to
render the next frame. But having to actually wait _can_ happen.
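
Both wait points might look roughly like this (a sketch under assumed
names and a simplified thread-tracking scheme; not foot's actual
code):

    #include <pthread.h>
    #include <stdbool.h>

    struct buffer;

    struct render_state {
        pthread_t pre_apply_tid;
        bool pre_apply_running;
        struct buffer *last_buf;
    };

    static void
    wait_for_pre_apply(struct render_state *r)
    {
        if (r->pre_apply_running) {
            /* Under normal circumstances this returns immediately;
             * the thread finished long ago */
            pthread_join(r->pre_apply_tid, NULL);
            r->pre_apply_running = false;
        }
    }

    static void
    render_frame(struct render_state *r)
    {
        /* Never render into a buffer the thread may still be writing */
        wait_for_pre_apply(r);
        /* ... render the updated cells ... */
    }

    static void
    reset_last_buf(struct render_state *r)
    {
        /* The thread reads last_buf; join first, or it may
         * dereference a dangling pointer */
        wait_for_pre_apply(r);
        r->last_buf = NULL;
    }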

Closes #2188
Author: Daniel Eklöf
Date:   2025-10-05 10:48:36 +02:00
Commit: 299186a654 (parent bb314425ef)

11 changed files with 287 additions and 26 deletions

shm.c (24 changed lines)

@@ -87,6 +87,9 @@ struct buffer_private {
     bool with_alpha;
     bool scrollable;
+
+    void (*release_cb)(struct buffer *buf, void *data);
+    void *cb_data;
 };
 
 struct buffer_chain {
@@ -100,6 +103,9 @@ struct buffer_chain {
     pixman_format_code_t pixman_fmt_with_alpha;
     enum wl_shm_format shm_format_with_alpha;
+
+    void (*release_cb)(struct buffer *buf, void *data);
+    void *cb_data;
 };
 
 static tll(struct buffer_private *) deferred;
@@ -232,6 +238,10 @@ buffer_release(void *data, struct wl_buffer *wl_buffer)
         xassert(found);
         if (!found)
             LOG_WARN("deferred delete: buffer not on the 'deferred' list");
+    } else {
+        if (buffer->release_cb != NULL) {
+            buffer->release_cb(&buffer->public, buffer->cb_data);
+        }
     }
 }
@@ -516,6 +526,8 @@ get_new_buffers(struct buffer_chain *chain, size_t count,
             .offset = 0,
             .size = sizes[i],
             .scrollable = chain->scrollable,
+            .release_cb = chain->release_cb,
+            .cb_data = chain->cb_data,
         };
 
         if (!instantiate_offset(buf, offset)) {
@@ -623,7 +635,7 @@ shm_get_buffer(struct buffer_chain *chain, int width, int height, bool with_alpha
                  * reuse. Pick the "youngest" one, and mark the
                  * other one for purging */
                 if (buf->public.age < cached->public.age) {
-                    shm_unref(&cached->public);
+                    //shm_unref(&cached->public);
                     cached = buf;
                 } else {
                     /*
@@ -634,8 +646,8 @@ shm_get_buffer(struct buffer_chain *chain, int width, int height, bool with_alpha
                      * should be safe; "our" tll_foreach() already
                      * holds the next pointer.
                      */
-                    if (buffer_unref_no_remove_from_chain(buf))
-                        tll_remove(chain->bufs, it);
+                    //if (buffer_unref_no_remove_from_chain(buf))
+                    //    tll_remove(chain->bufs, it);
                 }
             }
         }
@@ -994,7 +1006,8 @@ shm_unref(struct buffer *_buf)
 
 struct buffer_chain *
 shm_chain_new(struct wayland *wayl, bool scrollable, size_t pix_instances,
-              enum shm_bit_depth desired_bit_depth)
+              enum shm_bit_depth desired_bit_depth,
+              void (*release_cb)(struct buffer *buf, void *data), void *cb_data)
 {
     pixman_format_code_t pixman_fmt_without_alpha = PIXMAN_x8r8g8b8;
     enum wl_shm_format shm_fmt_without_alpha = WL_SHM_FORMAT_XRGB8888;
@@ -1090,6 +1103,9 @@ shm_chain_new(struct wayland *wayl, bool scrollable, size_t pix_instances,
         .pixman_fmt_with_alpha = pixman_fmt_with_alpha,
         .shm_format_with_alpha = shm_fmt_with_alpha,
+
+        .release_cb = release_cb,
+        .cb_data = cb_data,
     };
 
     return chain;
 }