We still use the primary font, but use a custom size, based on the
title bar’s height.
This fixes an issue where the window title could be way too small, or
way too big, and changed size whenever the terminal font size was
changed.
Set clip region in render_osd(). This ensures we don’t step outside
the pixman buffer when rendering the glyphs.
Furthermore, don’t ignore the alpha channel in the background color.
Move render_osd() to make it visible to the render_csd_*() functions.
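As a reference, a minimal sketch of how the clip region can be set up
with pixman before blitting the glyphs (the helper below is
illustrative, not foot’s actual code):

    #include <pixman.h>

    /* Restrict all subsequent composite operations to the buffer’s
       area, so glyph blits cannot write outside the pixman image. */
    static void
    clip_to_buffer(pixman_image_t *pix, int width, int height)
    {
        pixman_region32_t clip;
        pixman_region32_init_rect(&clip, 0, 0, width, height);
        pixman_image_set_clip_region32(pix, &clip);
        pixman_region32_fini(&clip);
    }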
Up until now, *all* buffers have been tracked in a single, global
buffer list. We've used 'cookies' to separate buffers from different
contexts (so that shm_get_buffer() doesn't try to re-use e.g. a
search-box buffer for the main grid).
This patch refactors this, and completely removes the global
list.
Instead of cookies, we now use 'chains'. A chain tracks both the
properties to apply to newly created buffers (scrollable, number of
pixman instances to instantiate, etc.), as well as the instantiated
buffers themselves.
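A rough sketch of what a chain might track (the field names and the
tllist usage are assumptions, not necessarily the actual layout):

    #include <stdbool.h>
    #include <stddef.h>
    #include <tllist.h>

    struct buffer;  /* the existing SHM buffer type */

    struct buffer_chain {
        tll(struct buffer *) bufs;  /* buffers created from this chain */
        size_t pix_instances;       /* pixman images per buffer */
        bool scrollable;            /* property applied to new buffers */
    };

    struct buffer_chain *shm_chain_new(bool scrollable, size_t pix_instances);
    void shm_chain_free(struct buffer_chain *chain);  /* purges its buffers too */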
This means there's strictly speaking not much use for shm_fini()
anymore, since it's up to the chain owner to call shm_chain_free(),
which will also purge all buffers.
However, purging a buffer may be deferred, if the buffer is owned by
the compositor at the time of the call to shm_purge() or
shm_chain_free(). For this reason, we still keep a global 'deferred'
list, onto which deferred buffers are pushed. shm_fini() iterates this
list and destroys the buffers _even_ if they are still owned by the
compositor. This only happens at program termination, and not when
destroying a terminal instance. I.e. closing a window in a “foot
--server” does *not* trigger this.
Each terminal instantiates a number of chains, and these chains are
destroyed when the terminal instance is destroyed. Note that some
buffers may be put on the deferred list, as mentioned above.
The initial ref-count is either 1 or 0, depending on whether the
buffer is supposed to be released "immediately" (meaning, as soon as
the compositor releases it).
Two new user-facing functions have been added: shm_addref() and
shm_unref().
Our renderer now uses these two functions instead of manually setting
and clearing the 'locked' attribute.
shm_unref() will decrement the ref-counter, and destroy the buffer
when the counter reaches zero. Except if the buffer is currently
"busy" (compositor owned), in which case destruction is deferred to
the release event. The buffer is still removed from the list though.
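In rough pseudo-C, the unref path described above might look like
this (a sketch; the struct members and helpers are assumed):

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct buffer {
        unsigned ref_count;
        bool busy;  /* currently owned by the compositor */
        /* ... */
    };

    static void buffer_destroy(struct buffer *buf);     /* assumed helper */
    static void defer_destruction(struct buffer *buf);  /* assumed helper */

    void
    shm_unref(struct buffer *buf)
    {
        if (buf == NULL)
            return;

        assert(buf->ref_count > 0);
        if (--buf->ref_count > 0)
            return;

        if (buf->busy) {
            /* Compositor still owns the buffer: remove it from its
               chain and destroy it in the wl_buffer release handler. */
            defer_destruction(buf);
        } else {
            buffer_destroy(buf);
        }
    }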
This patch adds a `confined` flag to each cell, to track whether the
last rendered glyph bled into its right neighbor. To keep things
simple, bleeding into any neighbor cell other than the immediate right
one is not allowed. This should cover most use cases.
Before rendering a row, we now do a prepass and mark as unclean all
cells that are affected by a bleeding neighbor. If there are
consecutive bleeding cells, the whole group must be re-rendered even
if only a single cell has changed.
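A sketch of what such a prepass could look like (the cell/attribute
names are assumptions based on the description above):

    /* A non-confined cell’s glyph bleeds into the cell to its right,
       so if either of the two cells needs re-rendering, both do. Two
       sweeps propagate dirtiness through runs of bleeding cells. */
    static void
    prepass_mark_overflowing(struct row *row, int cols)
    {
        /* Left to right: re-rendering a bleeding cell also overwrites
           part of its right neighbor. */
        for (int col = 0; col < cols - 1; col++) {
            const struct cell *c = &row->cells[col];
            if (!c->attrs.confined && !c->attrs.clean)
                row->cells[col + 1].attrs.clean = false;
        }

        /* Right to left: re-rendering a cell erases anything that bled
           into it, so the bleeding cell to its left must be redrawn. */
        for (int col = cols - 2; col >= 0; col--) {
            struct cell *c = &row->cells[col];
            if (!c->attrs.confined && !row->cells[col + 1].attrs.clean)
                c->attrs.clean = false;
        }
    }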
The patch also deprecates both old overflowing glyph options
*allow-overflowing-double-width-glyphs* and *pua-double-width* in favor
of a single new one named *overflowing-glyphs*.
If we have lots of URLs, we use up a *lot* of SHM buffers (and thus
pools). Each and every one is a single mmap(), of at least 4K.
Since all URL labels are created and destroyed at the same time, it
makes sense to use a single pool for all buffers.
To do this, we must now do two passes: a first one to generate the
actual strings (label content) and to calculate the label sizes.
Then we use this information to allocate all SHM buffers at once, from
the same pool.
Finally, we iterate the URLs again, this time to actually render them.
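A hypothetical outline of the two passes (the shm_get_many() signature
and the helpers shown here are assumptions, not foot’s actual code):

    /* 'urls', 'count' and 'chain' come from the surrounding context */

    /* Pass 1: build the label strings and measure each label */
    int *widths = malloc(count * sizeof(widths[0]));
    int *heights = malloc(count * sizeof(heights[0]));
    for (size_t i = 0; i < count; i++)
        url_label_size(&urls[i], &widths[i], &heights[i]);

    /* Allocate all buffers in one go, backed by a single SHM pool */
    struct buffer **bufs = shm_get_many(chain, count, widths, heights);

    /* Pass 2: render each label into its buffer */
    for (size_t i = 0; i < count; i++)
        render_url_label(&urls[i], bufs[i]);

    free(widths);
    free(heights);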
shm_get_many() always returns new buffers (i.e. never old, cached
ones). The newly allocated buffers are also marked for immediate
purging, meaning they’ll be destroyed on the next call to either
shm_get_buffer(), or shm_get_many().
Furthermore, we add a new attribute, ‘locked’, to the buffer
struct. When auto-purging buffers, we look at this instead of
comparing cookies.
Buffer consumers are expected to set ‘locked’ while they hold a
reference to a buffer and don’t want it destroyed behind their back.
There’s a chance the resize timeout FD was closed, and *reused*, after
epoll() told us the FD is readable, but before our callback runs.
Thus, closing the FD provided as an argument is dangerous, as it may
refer to something completely different.
A narrow, but offset, glyph should still be considered double width.
This patch also fixes a crash when the maybe-double-width glyph is in
the last column. This is a regression.
The previous implementation stored compose chains in a dynamically
allocated array. Adding a chain was easy: resize the array and append
the new chain at the end. Looking up a compose chain given a compose
chain key/index was also easy: just index into the array.
However, searching for a pre-existing chain given a codepoint sequence
was very slow. Since the array wasn’t sorted, we typically had to scan
through the entire array, just to realize that there is no
pre-existing chain, and that we need to add a new one.
Since this happens for *each* codepoint in a grapheme cluster, things
quickly became really slow.
Things were OK-ish as long as the compose chain struct was small, as
that made it possible to hold all the chains in the cache. Once the
number of chains reached a certain point, or when we were forced to
bump the maximum number of allowed codepoints in a chain, we started
thrashing the cache and things got much, much worse.
So what can we do?
We can’t sort the array, because
a) that would invalidate all existing chain keys in the grid (and
iterating the entire scrollback and updating compose keys is *not* an
option).
b) inserting a chain becomes slow as we need to first find _where_ to
insert it, and then memmove() the rest of the array.
This patch uses a binary search tree to store the chains instead of a
simple array.
The tree is sorted on a “key”, which is the XOR of all codepoints,
truncated to the CELL_COMB_CHARS_HI-CELL_COMB_CHARS_LO range.
The grid now stores CELL_COMB_CHARS_LO+key, instead of
CELL_COMB_CHARS_LO+index.
Since the key is truncated, collisions may occur. This is handled by
incrementing the key by 1.
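Roughly, the key derivation looks like this (a sketch: only the
CELL_COMB_CHARS_LO/HI constants come from the existing code, the rest
is assumed):

    static uint32_t
    compose_key(const uint32_t *chars, size_t count)
    {
        /* XOR all codepoints of the (partial) grapheme cluster */
        uint32_t key = 0;
        for (size_t i = 0; i < count; i++)
            key ^= chars[i];

        /* Truncate to the number of keys the cell encoding can hold;
           collisions are resolved by the caller, by bumping the key */
        return key % (CELL_COMB_CHARS_HI - CELL_COMB_CHARS_LO);
    }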
Lookup is of course slower than before, O(log n) instead of
O(1).
Insertion is slightly slower as well: technically it’s O(log n)
instead of O(1). However, we also need to take into account that
re-allocating the array would occasionally force a full copy of the
array, when it could not simply be grown in place.
But finding a pre-existing chain is now *much* faster: O(log n)
instead of O(n). In most cases, the first lookup will either
succeed (return a true match), or fail (return NULL). However, since
key collisions are possible, it may also return false matches. This
means we need to verify the contents of the chain before deciding to
use it instead of inserting a new chain. But remember that this
comparison was being done for each and every chain in the previous
implementation.
With lookups being much faster, and in particular, no longer requiring
us to check the chain contents for every single chain, we can now use
a dynamically allocated ‘chars’ array in the chain. This was
previously a hardcoded array of 10 chars.
Using a dynamically allocated array means looking things up in the
array is slower, since we now need two loads: one to load the pointer,
and a second to load _from_ the pointer.
As a result, the base size of a compose chain (i.e. an “empty” chain)
has been reduced from 48 bytes to 32. A chain with two codepoints is
40 bytes. This means we can have up to 4 codepoints while still using
the same amount of memory as before, or less.
Furthermore, the Unicode random test (i.e. writing random “unicode”
chars) is now **faster** than current master (i.e. before text-shaping
support was added), **with** text-shaping enabled. With text-shaping
disabled, we’re _even_ faster.
fcft’s view of how many columns a grapheme cluster is may differ from
our own. Make sure the rendered glyph matches the number of columns
that were allocated when the cluster was printed.
The configure event asks the client to change its decoration
mode. The configured state should not be applied immediately.
Clients must send an ack_configure in response to this event.
See xdg_surface.configure and xdg_surface.ack_configure for
details.
In particular, “the configured state should *not* be applied
immediately”.
Instead, treat CSD/SSD changes like all other window dimension related
changes: store the to-be mode in win->configure, and apply it in the
surface configure event.
This fixes an issue where foot incorrectly resized the window when the
server switched between CSD/SSD at run-time.
This option controls the foreground color of the
minimize/maximize/close buttons, i.e. the color used to draw the
minimize/maximize/close glyphs.
It defaults to the default background color.
Using the frame callback works most of the time, but e.g. Sway doesn’t
call it while the window is hidden, and thus prevents us from updating
the title in e.g. stacked views.
This patch uses a timer FD instead. We store a timestamp from when the
title was last updated. When the application wants to update the
title, we first check if we already have a timer running, and if so,
do nothing.
If no timer is running, check the timestamp. If enough time has
passed, update the title immediately.
If not, instantiate a timer and wait for it to trigger.
Set the minimum time between two updates to ~8ms (twice per frame, for
a 60Hz output, and ~once per frame on a 120Hz output).
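A sketch of the rate-limiting logic (field and helper names are
assumed, not foot’s actual code):

    #include <stdbool.h>
    #include <sys/timerfd.h>
    #include <time.h>

    #define TITLE_UPDATE_MIN_NS (8LL * 1000 * 1000)  /* ~8ms */

    struct title_throttle {
        int timer_fd;                 /* timerfd, CLOCK_MONOTONIC */
        bool timer_is_armed;
        struct timespec last_update;
    };

    static void update_title_now(void);  /* assumed: applies the title */

    static void
    title_maybe_update(struct title_throttle *t)
    {
        if (t->timer_is_armed)
            return;  /* an update is already scheduled; do nothing */

        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        long long elapsed_ns =
            (now.tv_sec - t->last_update.tv_sec) * 1000000000LL +
            (now.tv_nsec - t->last_update.tv_nsec);

        if (elapsed_ns >= TITLE_UPDATE_MIN_NS) {
            update_title_now();
            t->last_update = now;
        } else {
            /* Arm a one-shot timer for the remaining time; the FD
               handler updates the title and clears timer_is_armed. */
            struct itimerspec timeout = {
                .it_value.tv_nsec = TITLE_UPDATE_MIN_NS - elapsed_ns,
            };
            timerfd_settime(t->timer_fd, 0, &timeout, NULL);
            t->timer_is_armed = true;
        }
    }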
Closes #591
TAB (\t) moves the cursor to the next tab stop. That’s it, according
to the specification.
However, many terminal emulators try to keep tabs in the grid, to be
able to e.g. copy them. That is, copying a text chunk containing tabs
should result in tabs being pasted, not spaces.
In order to do that, we need to print a tab character to the grid. To
improve text reflow of tabs, we also print spaces to the subsequent
cells, up until (but not including) the next tab stop.
However, we can only do this if all the cells between the cursor and
the next tab stop are empty, since (obviously), we cannot overwrite
pre-existing characters.
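A sketch of the printing step (all helper names here are assumed, not
foot’s actual code):

    static void
    handle_tab(struct terminal *term)
    {
        const int col = term->grid->cursor.point.col;
        const int stop = next_tab_stop(term, col);      /* assumed */

        /* We may only keep the tab in the grid if every cell between
           the cursor and the next tab stop is empty. */
        bool all_empty = true;
        for (int c = col; c < stop && all_empty; c++)
            all_empty = cell_is_empty(term, c);         /* assumed */

        if (all_empty) {
            set_cell_char(term, col, '\t');             /* the tab itself */
            for (int c = col + 1; c < stop; c++)
                set_cell_char(term, c, ' ');            /* reflow padding */
        }

        term_cursor_to_col(term, stop);                 /* assumed */
    }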
Finally, while some fonts render tabs as spaces (i.e. an empty glyph),
some use a glyph representing “unprintable” characters, or
similar. Thus, we need to exclude cells with tab characters when
rendering.
This matches XTerm behavior, and fixes vttest 11.6.6.2:
Test non-VT100 ->
Test ISO-6429 colors ->
Test of VT102-style features with BCE ->
Test of screen features
When enabled, PUA (Private Usage Area) codepoints are always treated
as double-width glyphs, regardless of the actual glyph width.
Requires allow-overflowing-double-width-glyphs=yes
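For example, in foot.ini (the section name is assumed):

    [tweak]
    allow-overflowing-double-width-glyphs=yes
    pua-double-width=yes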
Works in pretty much the same way as ‘beam-thickness’, except that the
default value is “the font’s underline thickness”.
This means that, when unset, the cursor underline thickness scales
with the font size.
But when explicitly set, either to a point size or a pixel size, it
remains fixed at that size.
Closes #524