This function "prints" any non-ascii character (i.e. any character
that ends up in the action_utf8_print() function in vt.c) to the
grid. This includes grapheme cluster processing etc.
action_utf8_print() now simply calls this function.
This allows us to re-use the same functionality from other
places (like the text-sizing protocol).
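A minimal sketch of the extraction, with assumed signatures:

    #include <uchar.h>

    struct terminal;

    /* Sketch; the real term_print() does the grapheme cluster
     * processing, grid updates, etc. */
    static void
    term_print(struct terminal *term, char32_t wc)
    {
        (void)term; (void)wc;
    }

    /* The VT layer is now a thin wrapper */
    static void
    action_utf8_print(struct terminal *term, char32_t wc)
    {
        term_print(term, wc);
    }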
When the client application emits combining characters, for example
multi-codepoint emojis, in insert mode, we ended up pushing partial
graphemes to the right for each codepoint, resulting in too many
cells (with the wrong content) being inserted.
The fix is fairly simple: don't "insert" when appending characters to
an existing grapheme cluster.
This isn't something we can detect easily in print_insert() (it would
require us to do grapheme clustering again). Fortunately, we do have
the required information in action_utf8_print(). So, pass this
information as a boolean to term_print().
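A minimal sketch of the idea, with assumed names and signatures (the
real term_print() takes more state):

    #include <stdbool.h>
    #include <uchar.h>

    struct terminal { bool insert_mode; /* ... */ };

    static void
    print_insert(struct terminal *term, int width)
    {
        (void)term; (void)width;  /* shift cells to the right */
    }

    void
    term_print(struct terminal *term, char32_t wc, int width,
               bool extends_grapheme)
    {
        /* Appending to an existing grapheme cluster must not
         * insert new cells */
        if (term->insert_mode && !extends_grapheme)
            print_insert(term, width);
        (void)wc;  /* ... render wc into the current cell ... */
    }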
Closes #1947
When there's a key collision, we increment the key and check
again. When doing this, we need to ensure the key stays within range,
wrapping around to 0 if the key value would become too large.
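Roughly like this (a sketch; the names are assumptions):

    #include <stdint.h>

    /* Advance to the next candidate key on a collision, wrapping
     * around to 0 instead of running past the end of the key space */
    static uint32_t
    next_key(uint32_t key, uint32_t key_max)
    {
        return key >= key_max ? 0 : key + 1;
    }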
The things affecting which ASCII printer we use have grown...
Instead of checking everything inside term_update_ascii_printer(), use
a bitfield.
Anything affecting which printer is used must now set a bit in this
bitfield. This makes term_update_ascii_printer() much faster, since
all it needs to do is check if the bitfield is zero or not.
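A sketch of the scheme, with illustrative flag names:

    #include <stdint.h>
    #include <uchar.h>

    struct terminal;
    typedef void (*ascii_printer_fn)(struct terminal *term, char32_t c);

    /* One bit per condition that rules out the fast printer */
    enum {
        PRINTER_INSERT_MODE  = UINT32_C(1) << 0,
        PRINTER_SINGLE_SHIFT = UINT32_C(1) << 1,
        /* ... */
    };

    struct terminal {
        uint32_t printer_flags;
        ascii_printer_fn ascii_printer;
    };

    static void print_ascii_fast(struct terminal *t, char32_t c)
        { (void)t; (void)c; }
    static void print_ascii_generic(struct terminal *t, char32_t c)
        { (void)t; (void)c; }

    static void
    term_update_ascii_printer(struct terminal *term)
    {
        /* A single zero-check instead of re-testing each condition */
        term->ascii_printer = term->printer_flags == 0
            ? &print_ascii_fast : &print_ascii_generic;
    }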
At compile time, build a lookup table from the Unicode data file
'emoji-variation-sequences.txt'.
At run time, when we detect a VS-15/16 sequence, do a lookup in this
table, and enforce the variation selector iff the sequence is valid.
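A sketch of the run-time lookup; the table below holds only a few
example entries, whereas the real one is generated from
emoji-variation-sequences.txt at build time:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <uchar.h>

    /* Sorted list of base codepoints with a valid VS-16 sequence */
    static const char32_t vs16_bases[] = {
        0x0023,  /* NUMBER SIGN */
        0x002A,  /* ASTERISK */
        0x2139,  /* INFORMATION SOURCE */
        0x2764,  /* HEAVY BLACK HEART */
    };

    static int
    cmp_codepoint(const void *a, const void *b)
    {
        const char32_t *x = a, *y = b;
        return *x < *y ? -1 : *x > *y;
    }

    /* Enforce VS-16 iff (base, U+FE0F) is a valid sequence */
    static bool
    vs16_sequence_valid(char32_t base)
    {
        return bsearch(&base, vs16_bases,
                       sizeof(vs16_bases) / sizeof(vs16_bases[0]),
                       sizeof(vs16_bases[0]), &cmp_codepoint) != NULL;
    }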
Closes #1742
This fixes:
a) a compilation error with -Dgrapheme-clustering=disabled
b) U+FE0F incorrectly allocating two cells when grapheme clustering
has been disabled (either at compile time, in the config, or at run
time).
This implements private mode 2027 - grapheme cluster processing, as
defined in the "Terminal Unicode Core"[1] specification.
Internally, we just flip the already existing option "grapheme
shaping". Since it's now runtime changeable, we need a copy of it in
the terminal struct, rather than referencing the conf object.
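A sketch of the mode handling, with assumed field names:

    #include <stdbool.h>

    struct terminal {
        bool grapheme_shaping;  /* runtime copy, instead of
                                   referencing the conf object */
    };

    /* DECSET (enable=true) / DECRST (enable=false) dispatch */
    static void
    decset_decrst(struct terminal *term, unsigned mode, bool enable)
    {
        switch (mode) {
        case 2027:  /* grapheme cluster processing */
            term->grapheme_shaping = enable;
            break;
        }
    }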
[1]: 13fc5a8993/spec/terminal-unicode-core.tex (L50-L53)
../vt.c:648:13: runtime error: signed integer overflow: 3924432811 * 2654435761 cannot be represented in type 'long'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../vt.c:648:13 in
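The usual fix for this class of error (a sketch, not necessarily the
exact patch) is to do the multiplication in an unsigned type, where
wrap-around is well-defined:

    #include <stdint.h>

    /* Unsigned overflow wraps with defined behavior */
    static uint32_t
    hash_mix(uint32_t key)
    {
        return key * UINT32_C(2654435761);
    }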
Closes #1456
This patch detects invalid codepoints in the UTF-8 EDxxxx range, and
the F4xxxxxx range.
Note that we still allow the E0xxxx and F0xxxxxx ranges. These
contain overlong encodings. We allow them because they still decode
into correct UTF-32.
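A sketch of the added checks, assuming the continuation bytes are
range-checked elsewhere:

    #include <stdbool.h>
    #include <stdint.h>

    /* 0xED 0xA0-0xBF ... would decode to the UTF-16 surrogates
     * U+D800-U+DFFF; 0xF4 0x90-0xBF ... would decode to codepoints
     * above U+10FFFF. Both are invalid. */
    static bool
    second_byte_valid(uint8_t lead, uint8_t second)
    {
        if (lead == 0xED && second >= 0xA0)
            return false;
        if (lead == 0xF4 && second >= 0x90)
            return false;
        return true;
    }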
Closes #1423
Do not insert existing positions into the tab stop list.
This prevents a performance issue when iterating through
an extremely long tab stop list.
Also corrects the behaviour of CBT.
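A sketch of the duplicate check, using a plain array for illustration
(foot keeps the tab stops in a list):

    #include <stdbool.h>
    #include <stddef.h>

    /* Only insert a new tab stop if the column isn't already present */
    static bool
    tab_stop_exists(const int *stops, size_t count, int col)
    {
        for (size_t i = 0; i < count; i++)
            if (stops[i] == col)
                return true;
        return false;
    }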
We’re already switching on the next VT input byte in the state
machine; there’s no need for an if...else if chain in action_param() too.
That is, split up action_param() into three:
* action_param_new()
* action_param_new_subparam()
* action_param()
This makes the code cleaner, and hopefully slightly faster.
Next, to improve performance further, only check for (sub)parameter
overflow in action_param_new() and action_param_new_subparam().
Add pointers to the VT struct that point to the currently active
parameter and sub-parameter.
When the number of parameters (or sub-parameters) overflow, warn, and
then point the parameter pointer to a "dummy" value in the VT struct.
This way, we don’t have to check anything in action_param().
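A sketch of the dummy-parameter trick; names and sizes are
illustrative:

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_PARAMS 16  /* illustrative */

    struct vt {
        unsigned params[MAX_PARAMS];
        size_t param_count;
        unsigned dummy;       /* sink for overflowed parameters */
        unsigned *cur_param;  /* into params[], or at &dummy */
    };

    static void
    action_param_new(struct vt *vt)
    {
        if (vt->param_count < MAX_PARAMS)
            vt->cur_param = &vt->params[vt->param_count++];
        else
            vt->cur_param = &vt->dummy;  /* warn here, once */
        *vt->cur_param = 0;
    }

    /* Hot path: accumulate digits with no bounds check */
    static void
    action_param(struct vt *vt, uint8_t byte)
    {
        *vt->cur_param = *vt->cur_param * 10 + (byte - '0');
    }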
* Don’t assume 32 bits when rotating the old key. Use the number of
actual bits available, as determined by CELL_COMB_CHARS_{HI,LO}
* Multiply with magic hash constant
This greatly reduces the number of collisions seen. For example, the
Emoji test file (from the Unicode specification), now has zero
collisions.
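A sketch of the mixing step; KEY_BITS and the masking are assumptions
about how CELL_COMB_CHARS_{HI,LO} translate into a key width:

    #include <stdint.h>
    #include <uchar.h>

    #define KEY_BITS 28u  /* illustrative */
    #define KEY_MASK ((UINT32_C(1) << KEY_BITS) - 1)

    static uint32_t
    key_mix(uint32_t key, char32_t cp)
    {
        /* Rotate left by one, within KEY_BITS bits rather than 32 */
        key = ((key << 1) | (key >> (KEY_BITS - 1))) & KEY_MASK;
        key ^= (uint32_t)cp;
        key *= UINT32_C(2654435761);  /* magic hash constant */
        return key & KEY_MASK;
    }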
Composed characters are stored in a tree structure, using a key as
identifier. The key is calculated from the individual characters that
make up the composed character sequence.
Since the address space for keys is limited, collisions may occur. In
this case, we simply increment the key and try again.
It is theoretically possible to saturate the key space, in which case
we’ll get stuck in an endless loop.
Even if the key space isn’t fully saturated, we fairly easily reach a
point where there are so many collisions for each insertion that
performance drops significantly.
Since key space is limited (it’s not like a hash table that we can
grow), our only option is to limit the number of collisions. If we
can’t find a slot within a hard-coded number of collisions, the
character is simply dropped.
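A sketch of the bounded probing, with assumed names and a placeholder
for the real tree lookup:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_COLLISIONS 16  /* assumed limit */

    static bool key_usable(uint32_t key) { (void)key; return true; }

    static bool
    find_key(uint32_t key, uint32_t key_max, uint32_t *out)
    {
        for (int i = 0; i < MAX_COLLISIONS; i++) {
            if (key_usable(key)) {
                *out = key;
                return true;
            }
            key = key >= key_max ? 0 : key + 1;  /* wrap around */
        }
        return false;  /* give up; drop the character */
    }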
Fcft no longer uses wchar_t, but plain uint32_t to represent
codepoints.
Since we do a fair amount of string operations in foot, it still makes
sense to use something that actually _is_ a string (or character),
rather than an array of uint32_t.
For this reason, we switch out all wchar_t usage in foot to
char32_t. We also verify, at compile time, that char32_t uses
UTF-32 (which is what fcft expects).
Unfortunately, there are no string functions for char32_t. To avoid
having to re-implement all wcs*() functions, we add a small wrapper
layer of c32*() functions.
These wrapper functions take char32_t arguments, but then simply call
the corresponding wcs*() function.
For this to work, wcs*() must _also_ be UTF-32 compatible. We can
check for the presence of the __STDC_ISO_10646__ macro. If set,
wchar_t is at least 4 bytes and its internal representation is UTF-32.
FreeBSD does *not* define this macro, because its internal wchar_t
representation depends on the current locale. It _does_ use UTF-32
_if_ the current locale is UTF-8.
Since foot enforces UTF-8, we simply need to check if __FreeBSD__ is
defined.
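A sketch of one wrapper, together with the compile-time checks:

    #include <assert.h>
    #include <uchar.h>
    #include <wchar.h>

    #if !defined(__STDC_ISO_10646__) && !defined(__FreeBSD__)
     #error "cannot assume wchar_t is UTF-32 on this platform"
    #endif

    static_assert(sizeof(wchar_t) == sizeof(char32_t),
                  "c32*() wrappers require a 32-bit wchar_t");

    static inline size_t
    c32len(const char32_t *s)
    {
        return wcslen((const wchar_t *)s);
    }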
Other fcft API changes:
* fcft_glyph_rasterize() -> fcft_codepoint_rasterize()
* font.space_advance has been removed
* ‘tags’ have been removed from fcft_grapheme_rasterize()
* ‘fcft_log_init()’ removed
* ‘fcft_init()’ and ‘fcft_fini()’ must be explicitly called
All emoji graphemes are double-width. Foot doesn’t support non-Latin
scripts. Ergo, this should result in the Right Thing, even though
we’re not doing it the Right Way.
Note that we’re now breaking cursor synchronization with nearly all
applications.
But the way I see it, the applications need to be
updated.
The previous implementation stored compose chains in a dynamically
allocated array. Adding a chain was easy: resize the array and append
the new chain at the end. Looking up a compose chain given a compose
chain key/index was also easy: just index into the array.
However, searching for a pre-existing chain given a codepoint sequence
was very slow. Since the array wasn’t sorted, we typically had to scan
through the entire array, just to realize that there is no
pre-existing chain, and that we need to add a new one.
Since this happens for *each* codepoint in a grapheme cluster, things
quickly became really slow.
Things were OK-ish as long as the compose chain struct was small, as
that made it possible to hold all the chains in the cache. Once the
number of chains reached a certain point, or when we were forced to
bump maximum number of allowed codepoints in a chain, we started
thrashing the cache and things got much much worse.
So what can we do?
We can’t sort the array, because
a) that would invalidate all existing chain keys in the grid (and
iterating the entire scrollback and updating compose keys is *not* an
option).
b) inserting a chain becomes slow as we need to first find _where_ to
insert it, and then memmove() the rest of the array.
This patch uses a binary search tree to store the chains instead of a
simple array.
The tree is sorted on a “key”, which is the XOR of all codepoints,
truncated to the CELL_COMB_CHARS_HI-CELL_COMB_CHARS_LO range.
The grid now stores CELL_COMB_CHARS_LO+key, instead of
CELL_COMB_CHARS_LO+index.
Since the key is truncated, collisions may occur. This is handled by
incrementing the key by 1.
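A sketch of the key computation; the two constants are placeholders,
not foot’s real values:

    #include <stddef.h>
    #include <stdint.h>
    #include <uchar.h>

    #define CELL_COMB_CHARS_LO UINT32_C(0x40000000)  /* placeholder */
    #define CELL_COMB_CHARS_HI UINT32_C(0x5fffffff)  /* placeholder */

    static uint32_t
    chain_key(const char32_t *cps, size_t count)
    {
        uint32_t key = 0;
        for (size_t i = 0; i < count; i++)
            key ^= (uint32_t)cps[i];
        /* Truncate into the available key space */
        return key % (CELL_COMB_CHARS_HI - CELL_COMB_CHARS_LO + 1);
    }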
Lookup is of course slower than before, O(log n) instead of
O(1).
Insertion is slightly slower as well: technically it’s O(log n)
instead of O(1). However, we also need to take into account that
re-allocating the array will occasionally force a full copy of the
array when it cannot simply be grown.
But finding a pre-existing chain is now *much* faster: O(log n)
instead of O(n). In most cases, the first lookup will either
succeed (return a true match), or fail (return NULL). However, since
key collisions are possible, it may also return false matches. This
means we need to verify the contents of the chain before deciding to
use it instead of inserting a new chain. But remember that this
comparison was being done for each and every chain in the previous
implementation.
With lookups being much faster, and in particular, no longer requiring
us to check the chain contents for every single chain, we can now use
a dynamically allocated ‘chars’ array in the chain. This was
previously a hardcoded array of 10 chars.
Using a dynamically allocated array means looking in the array is slower,
since we now need two loads: one to load the pointer, and a second to
load _from_ the pointer.
As a result, the base size of a compose chain (i.e. an “empty” chain)
has now been reduced from 48 bytes to 32. A chain with two codepoints
is 40 bytes (32 + 2 × 4). This means we can hold up to 4 codepoints
while still using the same amount of memory as before (32 + 4 × 4 =
48 bytes), or less.
Furthermore, the Unicode random test (i.e. write random “unicode”
chars) is now **faster** than current master (i.e. before text-shaping
support was added), **with** text-shaping enabled. With text-shaping
disabled, we’re _even_ faster.
When checking if we already have a compose chain for the current
sequence of characters, don’t search the list from the beginning,
unless we have to.
Taking the following things into consideration:
* New compose chains are always appended at the end of the list
* If the current sequence is 3 or more characters, it *must* consist
of an existing compose chain, plus the new character.
Thus, when searching, start at index 0 if we only have two characters,
since then the base cell originally contained a regular base
character, and not a compose chain. I.e. the new chain may be
_anywhere_ in the chain list.
If however we have a sequence of three or more characters, start at
the index the *base* chain was at. If the chain we’re searching for
exists, it *must* have been added *after* the base chain, and thus
it *must* be located *after* the base chain in the chain list.
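In code, the starting point boils down to something like this (names
assumed):

    #include <stddef.h>

    /* A two-character sequence means the base cell held a plain
     * character, so the matching chain could be anywhere; longer
     * sequences must have been added after their base chain. */
    static size_t
    search_start(size_t seq_len, size_t base_chain_idx)
    {
        return seq_len == 2 ? 0 : base_chain_idx;
    }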
We already have all the widths needed to calculate the new one; it’s
the base character’s width (base_width), or the previous combining
chain’s width (composed->width), plus the new character’s
width (width).
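A sketch using the names from above:

    #include <stddef.h>

    struct composed { int width; /* ... */ };

    /* The width of what we're extending, plus the new character's
     * width (typically 0 for combining marks) */
    static int
    new_width(const struct composed *composed, int base_width, int width)
    {
        return (composed != NULL ? composed->width : base_width) + width;
    }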
This commit also renames the term_set_single_shift_ascii_printer()
function to term_single_shift(), since the former is overly verbose
and not really even accurate.
These sequences are supposed to affect the next printable ASCII
character and then reset to the previous character set, but before
this commit they were behaving like locking shifts.