Since foot is pretty aggressive about spawning the client early, it
was possible for a fast client to read a 0x0 terminal size. Not all
clients coped well.
Closes #20.
When scrolling, there are a couple of cases where an existing
selection must be canceled because we cannot meaningfully represent it
after scrolling.
These are when the selection is (partly) inside:
* The top scrolling region
* The bottom scrolling region
* The new lines scrolled in. I.e. re-used lines
For the scrolling regions, the real problem is when the selection
crosses the scrolling region boundary; a selection that is completely
inside a scrolling region _might_ be possible to keep, but we would
need to translate the selection coordinates to the new scrolling
region lines.
For simplicity, we cancel the selection if it touches the scrolling
region. Period.
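Roughly, the check boils down to something like this (an illustrative sketch only; the struct fields and helper name are made up, not foot's actual selection API):

```c
#include <stdbool.h>

/* Illustrative only - not foot's actual structs or API */
struct selection {
    int start_row;
    int end_row;
};

/* Cancel-decision: does the selection touch either scrolling region?
 * The top region is rows [0, region_top); the bottom region is rows
 * [region_bottom, rows). */
static bool
selection_touches_scroll_regions(const struct selection *sel,
                                 int region_top, int region_bottom)
{
    if (sel->start_row < region_top || sel->end_row < region_top)
        return true;
    if (sel->start_row >= region_bottom || sel->end_row >= region_bottom)
        return true;
    return false;
}
```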
The last item, newly scrolled-in lines, is when the selection covers
very old lines and we're now wrapping around the scrollback history.
Then there's a fourth problem case: when the user has started a
selection, but hasn't yet moved the cursor. In this case, we have no
end point.
What's more problematic is that when the user (after scrolling) moves
the cursor, we try to create a huge selection that covers mostly
empty (NULL) rows, causing us to crash.
This can happen e.g. when reverse scrolling in such a way that we wrap
around the scrollback history.
The actual viewport in this case is something like `-n - m`. But the
selection we'll end up trying to create will be `m - (rows - n)`. This
range may very well contain NULL rows.
To deal with this, we simply cancel the selection.
We normally don't clear the selection when scrolling. The exception is
when the selection covers re-used rows, i.e. rows that we scroll in
and clear.
In this case we cancel the selection (we _could_ modify it, keeping as
much as possible and only removing the re-used rows...). We must do
this *before* scrolling, since scrolling will swap rows (when there's
a scrolling region). When this happens, the selection is "corrupted",
and canceling it afterwards will not work.
Instead of storing combining data per cell, realize that most
combinations are recurring and that there's lots of available space
left in the Unicode range, and store the base+combining chains we've
seen in a per-terminal array.
When we encounter a combining character, we first try to pre-compose,
like before. If that fails, we then search for the current
base+combining combo in the list of previously seen combinations. If
not found there either, we allocate a new combo and add it to the
list. Regardless, the result is an index into this array. We store
this index, offset by COMB_CHARS_LO=0x40000000ul, in the cell.
When rendering, we need to check if the cell character is a plain
character, or if it's a composed character (identified by checking if
the cell character is >= COMB_CHARS_LO).
Then we render the grapheme pretty much like before.
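Roughly, the lookup/allocation looks like this (a simplified sketch: the names, the two-character chain limit, the growth strategy and the missing error handling are all illustrative, not foot's actual code):

```c
#include <stdint.h>
#include <stdlib.h>
#include <wchar.h>

#define COMB_CHARS_LO 0x40000000ul
#define MAX_COMB      4   /* illustrative limit on chain length */

/* One previously-seen base+combining chain (illustrative layout) */
struct composed {
    wchar_t chars[MAX_COMB];
    size_t count;
};

struct term_combs {
    struct composed *v;
    size_t count;
    size_t size;
};

/* Return the cell value for 'base' followed by 'combining': an index
 * into the per-terminal array, offset by COMB_CHARS_LO. */
static uint32_t
comb_chars_lookup(struct term_combs *tc, wchar_t base, wchar_t combining)
{
    /* Re-use a previously seen combination if there is one */
    for (size_t i = 0; i < tc->count; i++) {
        const struct composed *c = &tc->v[i];
        if (c->count == 2 && c->chars[0] == base && c->chars[1] == combining)
            return COMB_CHARS_LO + i;
    }

    /* Not seen before - allocate a new entry (error handling omitted) */
    if (tc->count >= tc->size) {
        tc->size = tc->size > 0 ? tc->size * 2 : 16;
        tc->v = realloc(tc->v, tc->size * sizeof(tc->v[0]));
    }

    struct composed *c = &tc->v[tc->count];
    c->chars[0] = base;
    c->chars[1] = combining;
    c->count = 2;
    return COMB_CHARS_LO + tc->count++;
}
```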
We only used utf8proc to try to pre-compose a glyph from a base and
combining character.
We can do this ourselves by using a pre-compiled table of valid
pre-compositions. This table isn't _that_ big, and binary searching it
is fast.
That is, for a very small amount of code, and not too much extra RO
data, we can get rid of the utf8proc dependency.
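The idea is a sorted table keyed on (base, combining) plus a binary search, something like this sketch (the three table entries are just examples; the real table is pre-compiled and much larger):

```c
#include <wchar.h>

/* A pre-compiled table of valid pre-compositions, sorted on (base, comb).
 * The entries below are examples only. */
struct precomp {
    wchar_t base;
    wchar_t comb;
    wchar_t replacement;
};

static const struct precomp precomp_table[] = {
    {L'A', 0x0300, 0x00C0},  /* A + combining grave -> À */
    {L'A', 0x0301, 0x00C1},  /* A + combining acute -> Á */
    {L'E', 0x0301, 0x00C9},  /* E + combining acute -> É */
};

/* Binary search for a pre-composed replacement; returns 0 if none exists */
static wchar_t
precompose(wchar_t base, wchar_t comb)
{
    size_t lo = 0;
    size_t hi = sizeof(precomp_table) / sizeof(precomp_table[0]);

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        const struct precomp *p = &precomp_table[mid];

        if (p->base < base || (p->base == base && p->comb < comb))
            lo = mid + 1;
        else if (p->base > base || (p->base == base && p->comb > comb))
            hi = mid;
        else
            return p->replacement;
    }
    return 0;
}
```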
This is basically just a loop that renders additional glyphs on top
of the base glyph.
Since we now need access to the row struct, for the combining
characters, the function prototype for 'render_cell()' has changed.
When rendering the combining characters we also need to deal with
broken fonts; some fonts use positive offsets (as if the combining
character was a regular character), while others use a negative
offset (to be used as if you had already advanced the pen position).
We handle both - a positive offset is rendered just like a regular
glyph. When we see a negative offset we simply add the width of a cell
first.
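The offset handling boils down to something like this (sketch; the glyph struct and draw_glyph() stand in for the real glyph type and blit call):

```c
#include <stddef.h>

/* Illustrative glyph type */
struct glyph {
    int x_offset;   /* horizontal bearing reported by the font */
    /* ... bitmap, y offset, etc. ... */
};

/* Stand-in for the actual blit into the frame buffer */
static void
draw_glyph(const struct glyph *g, int x, int y)
{
    (void)g; (void)x; (void)y;
}

static void
render_combining(const struct glyph *const *glyphs, size_t count,
                 int pen_x, int pen_y, int cell_width)
{
    for (size_t i = 0; i < count; i++) {
        const struct glyph *g = glyphs[i];
        int x = pen_x;

        /* Broken-font workaround: a negative offset means the font
         * assumes the pen has already been advanced past the base
         * glyph, so add one cell width first. A positive offset is
         * rendered just like a regular glyph. */
        if (i > 0 && g->x_offset < 0)
            x += cell_width;

        draw_glyph(g, x, pen_y);
    }
}
```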
This is the *only* place combining characters are reset. In
particular, we do *not* reset them in a normal cell erase.
This is a performance design decision - clearing the combining
characters in erase is way too slow.
This way, we clear it only when we *have* to. Anything looking at the
combining characters must first ensure the base character is not 0.
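In other words, every reader of the combining data needs a guard along these lines (sketch; the field names are illustrative):

```c
#include <stdbool.h>

/* Illustrative cell layout; not foot's actual struct */
struct cell {
    unsigned wc;             /* base character; 0 means the cell is empty */
    unsigned combining[3];   /* possibly stale when wc == 0 */
};

/* The combining data is only valid if the base character is set;
 * an erased cell may still carry stale combining data. */
static bool
cell_combining_valid(const struct cell *c)
{
    return c->wc != 0;
}
```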
The way things work right now, we cannot enable the ptmx FDM callback
right away. We need to wait for the Wayland window to have been
configured.
Before the window is configured, we have no size and no grid. Thus,
if we try to process ptmx data, we'll crash since we have nowhere to
write it to.
So, registering the ptmx fd with the FDM is now delayed until we've
received the first 'configure' event from Wayland.
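The shape of the change, sketched with raw epoll in place of the FDM (the struct and function names are illustrative, not foot's actual API):

```c
#include <stdbool.h>
#include <sys/epoll.h>

/* Illustrative state; not foot's actual structs */
struct terminal {
    int epoll_fd;
    int ptmx_fd;
    bool ptmx_registered;
};

/* Called from the first 'configure' event. Only now do we have a size
 * and a grid, so only now is it safe to start reading ptmx data.
 * Error handling omitted. */
static void
term_on_first_configure(struct terminal *term)
{
    if (!term->ptmx_registered) {
        struct epoll_event ev = {
            .events = EPOLLIN,
            .data.ptr = term,
        };
        epoll_ctl(term->epoll_fd, EPOLL_CTL_ADD, term->ptmx_fd, &ev);
        term->ptmx_registered = true;
    }
}
```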
Use four threads to load the four primary font variants - normal,
bold, italic and bold italic.
This speeds up initial startup, and reloading of fonts on a DPI
change.
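Sketch of the idea with pthreads (the font loading itself is a stand-in printf):

```c
#include <pthread.h>
#include <stdio.h>

/* One worker per primary font variant */
static const char *const variants[4] = {
    "regular", "bold", "italic", "bold italic"
};

static void *
load_one_font(void *arg)
{
    const char *variant = arg;
    /* Stand-in for the actual (slow) font instantiation */
    printf("loading %s\n", variant);
    return NULL;
}

int
main(void)
{
    pthread_t threads[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, &load_one_font, (void *)variants[i]);

    /* All four variants load in parallel; wait for them to finish */
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
```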
At this point, we're not mapped, but we should have all the outputs
initialized, which means we can at least guess which subpixel mode to
use.
If that turns out to be wrong, something we'll detect when we're
mapped, we'll just have to re-render.
These lists are typically empty when we destroy the terminal. However,
if we had queued up damage, and then managed to destroy the terminal
instance before the last changes were rendered, they will *not*
be empty.
Found by the address sanitizer.
In reset, we allocated new rows for all the currently visible
lines. We did **not**, however, free the 'old' rows.
Fix by not explicitly allocating new rows, but instead allocating
uninitialized rows when needed, and then explicitly erasing the row.
If there already was a row allocated, it is simply erased. If there
wasn't, a new row is malloc:ed and then erased.
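The reset path thus becomes "erase, allocating first if needed", roughly (sketch; the struct layout and names are illustrative, error handling omitted):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative row/grid layout */
struct cell { unsigned wc; };
struct row  { struct cell *cells; };

/* Erase an existing row, or allocate an uninitialized one and erase it.
 * Nothing is leaked: whatever row was already there is simply re-used. */
static struct row *
row_reset(struct row *row, int cols)
{
    if (row == NULL) {
        row = malloc(sizeof(*row));
        row->cells = malloc(cols * sizeof(row->cells[0]));
    }

    /* Explicitly erase - this replaces the old "allocate new rows" path */
    memset(row->cells, 0, cols * sizeof(row->cells[0]));
    return row;
}
```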
We now create a copy of the config for each client, and update it
with the values passed from the client.
Since we're not actually cloning it deeply (e.g. strdup():ing all
strings), we can't call conf_destroy() to free it, but need to free
just the strings we've replaced.
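Sketch of the idea, with a one-string config for illustration (the names other than conf_destroy() are made up):

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative subset of the config */
struct config {
    char *term;   /* e.g. the TERM value */
    /* ... */
};

/* Per-client copy: shallow copy, then overwrite selected fields with
 * strdup():ed values from the client */
static struct config
config_for_client(const struct config *global, const char *client_term)
{
    struct config conf = *global;        /* shallow copy */
    if (client_term != NULL)
        conf.term = strdup(client_term); /* replaced -> we own it */
    return conf;
}

/* We can't call conf_destroy() on a shallow copy; only free what we
 * replaced ourselves */
static void
config_for_client_free(struct config *conf, const struct config *global)
{
    if (conf->term != global->term)
        free(conf->term);
}
```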