Fcft no longer uses wchar_t, but plain uint32_t to represent
codepoints.
Since we do a fair amount of string operations in foot, it still makes
sense to use something that actually _is_ a string (or character),
rather than an array of uint32_t.
For this reason, we switch out all wchar_t usage in foot to
char32_t. We also verify, at compile time, that char32_t uses
UTF-32 (which is what fcft expects).
Unfortunately, there are no string functions for char32_t. To avoid
having to re-implement all wcs*() functions, we add a small wrapper
layer of c32*() functions.
These wrapper functions take char32_t arguments, but then simply call
the corresponding wcs*() function.
For this to work, wcs*() must _also_ be UTF-32 compatible. We can
check for the presence of the __STDC_ISO_10646__ macro. If set,
wchar_t is at least 4 bytes and its internal representation is UTF-32.
FreeBSD does *not* define this macro, because its internal wchar_t
representation depends on the current locale. It _does_ use UTF-32
_if_ the current locale is UTF-8.
Since foot enforces UTF-8, we simply need to check if __FreeBSD__ is
defined.
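
A minimal sketch of the idea, with illustrative wrapper names (the
actual set of c32*() wrappers in foot is larger):

    #include <assert.h>
    #include <uchar.h>
    #include <wchar.h>

    #if !defined(__STDC_ISO_10646__) && !defined(__FreeBSD__)
     #error "wchar_t is not known to use UTF-32"
    #endif

    static_assert(sizeof(wchar_t) == sizeof(char32_t),
                  "wchar_t and char32_t differ in size");

    /* Each c32*() wrapper simply forwards to the corresponding wcs*()
     * function; valid here because both types use UTF-32. */
    static inline size_t
    c32len(const char32_t *s)
    {
        return wcslen((const wchar_t *)s);
    }

    static inline int
    c32cmp(const char32_t *a, const char32_t *b)
    {
        return wcscmp((const wchar_t *)a, (const wchar_t *)b);
    }
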
Other fcft API changes:
* fcft_glyph_rasterize() -> fcft_codepoint_rasterize()
* font.space_advance has been removed
* ‘tags’ have been removed from fcft_grapheme_rasterize()
* ‘fcft_log_init()’ removed
* ‘fcft_init()’ and ‘fcft_fini()’ must be explicitly called
POSIX.1-2008 has marked gettimeofday(2) as obsolete, recommending the
use of clock_gettime(2) instead.
CLOCK_MONOTONIC has been used instead of CLOCK_REALTIME because it is
unaffected by manual changes in the system clock. This makes it better
for our purposes, namely, measuring the difference between two points in
time.
tv_sec has been cast to long in most places since POSIX does not
define the actual type of time_t.
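
For illustration, the pattern now looks roughly like this (helper
names are made up):

    #include <stdio.h>
    #include <time.h>

    /* Measure elapsed time between two points; CLOCK_MONOTONIC is
     * unaffected by changes to the system clock. */
    static void
    print_elapsed(const struct timespec *start, const struct timespec *end)
    {
        long secs = (long)(end->tv_sec - start->tv_sec);
        long nsecs = end->tv_nsec - start->tv_nsec;

        if (nsecs < 0) {
            secs--;
            nsecs += 1000000000L;
        }

        /* Cast tv_sec to long, since POSIX does not specify the exact
         * type of time_t. */
        printf("elapsed: %ld.%09ld s\n", secs, nsecs);
    }

    static void
    example(void)
    {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        /* ... do work ... */
        clock_gettime(CLOCK_MONOTONIC, &end);
        print_elapsed(&start, &end);
    }
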
When using indexed colors (i.e. SGR 30/40/90/100), store the index
into the cell’s fg/bg attributes, not the actual color value.
This has a couple of consequences:
Color table lookup is now done when rendering. This means a rendered
cell will always reflect the *current* color table, not the color
table that was in use when the cell was printed to.
This simplifies the OSC-4/104 logic, since we no longer need to update
the grid - we just have to damage it to trigger rendering.
Furthermore, this change simplifies the VT parsing, since we no longer
need to do any memory loads (except loading the SGR parameter values),
only writes.
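
As a hypothetical sketch of the difference (foot's actual cell layout
is more compact than this):

    #include <stdbool.h>
    #include <stdint.h>

    struct color_table { uint32_t colors[256]; };

    struct cell_color {
        bool is_indexed;          /* set for SGR 30-37/40-47/90-97/100-107 */
        union {
            uint8_t idx;          /* palette index, stored in the cell */
            uint32_t rgb;         /* direct RGB color */
        };
    };

    /* The index is resolved against the *current* color table only at
     * render time, so OSC 4/104 changes show up without touching the
     * grid. */
    static uint32_t
    resolve_color(const struct cell_color *c, const struct color_table *t)
    {
        return c->is_indexed ? t->colors[c->idx] : c->rgb;
    }
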
CSD borders are always *at least* 5px. If url.border-width=0, those
5px are all fully transparent (and act as interactive resize handles).
As csd.border-width increases, the number of transparent pixels
decreases. Once csd.border-width >= 5, the border is fully opaque.
When csd.border-width > 5, the width of the border is (obviously)
more than 5px. But, when rendering the opaque part of the border, we
still used 5px for the invisible part, which caused some pixman
rectangles to have negative x/y coordinates.
This resulted in rendering glitches due to overflows in pixman when
rendering the borders.
The fix is to ensure the total border size is always *at least* 5px,
but not always *exactly* 5px. That is, set it to
max(5, csd.border-width).
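
In code, the fix amounts to something like this (names are
illustrative):

    struct border_size {
        int total;        /* >= 5 */
        int opaque;       /* colored part */
        int transparent;  /* invisible resize-handle part */
    };

    static struct border_size
    border_size(int csd_border_width)
    {
        struct border_size b;
        b.total = csd_border_width > 5 ? csd_border_width : 5;
        b.opaque = csd_border_width;
        b.transparent = b.total - b.opaque;  /* 0 when width >= 5 */
        return b;
    }
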
This patch also fixes an issue where the CSD borders were not
dimmed (like the titlebar) when the window loses input focus.
Closes #823
Each cell now tracks its current color source:
* default fg/bg
* base16 fg/bg (maps to *both* the regular and bright colors)
* base256 fg/bg
* RGB
Note that we don’t have enough bits to separate the regular from the
bright colors. These _shouldn’t_ be the same, so we ought to be
fine...
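
An illustrative sketch of the tracking (the actual enum names and bit
layout in foot differ):

    enum color_source {
        COLOR_DEFAULT,    /* default fg/bg */
        COLOR_BASE16,     /* regular *and* bright palette colors */
        COLOR_BASE256,    /* 256-color palette */
        COLOR_RGB,        /* direct RGB */
    };

    struct cell_attrs {
        unsigned fg_src : 2;   /* enum color_source; 2 bits = 4 sources */
        unsigned bg_src : 2;
        /* ... remaining attribute bits ... */
    };
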
This allows you to configure custom colors to be used when colors are
being dimmed (`\E[2m`).
It is implemented by color matching (just like
bold-text-in-bright=palette-based); the color-to-be-dimmed is matched
against the current color palette.
If it matches one of the regular colors (colors 0-7), the
corresponding “dim” color will be used.
If it matches one of the bright colors (colors 8-15), the
corresponding “regular” color will be used (but *only* if the “dim”
color has been set).
Otherwise, the color is dimmed by reducing its luminance.
The default behavior, i.e. when dim0-7 hasn’t been configured, is to
dim by reducing luminance for *all* colors. I.e. we don’t do any color
matching at all. In particular, this means that dimming a bright color
will *not* result in the corresponding “regular” color.
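
A sketch of the matching logic, assuming hypothetical palette/dim
arrays and a naive luminance helper (foot's real implementation
differs in how colors and configuration are represented):

    #include <stdbool.h>
    #include <stdint.h>

    /* Naive stand-in for dimming by reduced luminance */
    static uint32_t
    reduce_luminance(uint32_t color)
    {
        uint32_t r = ((color >> 16) & 0xff) * 2 / 3;
        uint32_t g = ((color >> 8) & 0xff) * 2 / 3;
        uint32_t b = (color & 0xff) * 2 / 3;
        return (color & 0xff000000u) | (r << 16) | (g << 8) | b;
    }

    static uint32_t
    dim_color(uint32_t color, const uint32_t palette[16],
              const uint32_t dim[8], const bool dim_is_set[8])
    {
        for (int i = 0; i < 16; i++) {
            if (palette[i] != color)
                continue;

            if (i < 8 && dim_is_set[i])
                return dim[i];           /* regular -> configured dim color */
            if (i >= 8 && dim_is_set[i - 8])
                return palette[i - 8];   /* bright -> regular */
            break;
        }

        /* No matching dim color configured: reduce luminance */
        return reduce_luminance(color);
    }
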
Closes #776
When we’re using CSDs, we’ve up until now rendered a 5px invisible
border. This border handles interactive resizing. I.e. hovering it
changes the mouse cursor, and mouse button events are used to start an
interactive resize.
This patch makes it possible to color part of (or the entire) border,
with a configurable color.
To facilitate this, two new options have been added:
* csd.border-width
* csd.border-color
border-width defaults to 0, resulting in the look we’re used to.
border-color defaults to the title bar color. If the title bar color
hasn’t been set, it defaults to the default foreground color (just
like the title bar color does).
This means that, setting border-width but not border-color, results in
a border that blends with the title bar.
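
For example, in foot.ini (the color format is assumed to match the
other csd.* color options; the value shown is purely illustrative):

    [csd]
    border-width=3
    # falls back to the title bar color when omitted
    border-color=ff2e3440
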
The box_drawings array is now quite large, and uses up ~4K
when *empty*.
This patch splits it up into three separate, dynamically allocated
arrays; one for the traditional box+line drawing and block elements
glyphs, one for braille, and one for the legacy computing symbols.
When we need to render a glyph, the *entire* array (that it belongs
to) is allocated.
I.e. this is one step closer to a dynamic glyph cache (like the one
fcft uses), but doesn’t go all the way.
This is especially nice for people with
‘box-drawings-uses-font-glyphs=yes’; for them, the custom glyphs now
use 3*8 bytes (for the three array pointers), instead of 4K.
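
Roughly, the per-terminal state goes from one large static array to
three pointers, along the lines of (illustrative struct; the real one
differs):

    struct fcft_glyph;

    struct custom_glyphs {
        /* Each array is NULL until a glyph from that group is first
         * rendered; then the whole array is allocated at once. */
        struct fcft_glyph **box_drawing;   /* box/line drawing + block elements */
        struct fcft_glyph **braille;       /* U+2800 - U+28FF */
        struct fcft_glyph **legacy;        /* legacy computing symbols */
    };
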
Render braille ourselves, instead of using font glyphs. Decoding a
braille character is easy enough; there are 256 codepoints,
represented by an 8-bit integer (i.e. subtract the Unicode codepoint
offset, 0x2800, and you’re left with an integer in the range 0-255).
Each bit corresponds to a dot. The first 6 bits represent the upper 6
dots, while the last two bits represent the fourth (and last) row of
dots.
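
Decoding, in sketch form (the real renderer works directly on pixman
images rather than a bool matrix):

    #include <stdbool.h>
    #include <stdint.h>

    /* Map a braille codepoint (U+2800-U+28FF) to a 4x2 dot matrix.
     * Bits 0-5 cover the upper three rows, bits 6-7 the fourth row. */
    static void
    decode_braille(uint32_t cp, bool dots[4][2])
    {
        uint8_t v = cp - 0x2800;   /* 0-255 */

        dots[0][0] = v & 0x01;     /* dot 1 */
        dots[1][0] = v & 0x02;     /* dot 2 */
        dots[2][0] = v & 0x04;     /* dot 3 */
        dots[0][1] = v & 0x08;     /* dot 4 */
        dots[1][1] = v & 0x10;     /* dot 5 */
        dots[2][1] = v & 0x20;     /* dot 6 */
        dots[3][0] = v & 0x40;     /* dot 7 */
        dots[3][1] = v & 0x80;     /* dot 8 */
    }
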
The hard part is sizing the dots and the spacing between them.
The aim is to have the spacing between the dots be the same size as
the dots themselves, and to have the margins on each side be half the
size of the dots.
In a perfectly sized cell, this means two braille characters next to
each other will be evenly spaced.
This is, however, almost never the case. The layout logic currently
works as follows (a code sketch follows below):
* Set dot size to either the width / 4, or height / 8, depending on
which one is smallest.
* Horizontal spacing is initialized to the width / 4
* Vertical spacing is initialized to the height / 8
* Horizontal margins are initialized to the horizontal spacing / 2
* Vertical margins are initialized to the vertical spacing / 2.
Next, we calculate the number of “remaining” pixels. That is, if we
add the left margin, two dots and the spacing between, how many pixels
are left on the horizontal axis?
These pixels are distributed in the following order (we “stop” as soon
as we run out of pixels):
* If the dot size is 0 (happens for very small font sizes), increase
it to 1.
* If the margins are 0, increase them to 1.
* If we have enough pixels (we need at least 2 horizontal and 4
vertical), increase the dot size.
* Increase spacing.
* Increase margins.
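
A simplified sketch of the sizing above; it covers only the initial
values and the leftover computation, with the distribution step left
as a comment:

    struct braille_layout {
        int dot_size;
        int h_spacing, v_spacing;
        int h_margin, v_margin;
    };

    static struct braille_layout
    braille_layout(int cell_width, int cell_height)
    {
        struct braille_layout l;

        /* Dot size: the smaller of width / 4 and height / 8 */
        int w = cell_width / 4, h = cell_height / 8;
        l.dot_size = w < h ? w : h;

        l.h_spacing = cell_width / 4;
        l.v_spacing = cell_height / 8;
        l.h_margin = l.h_spacing / 2;
        l.v_margin = l.v_spacing / 2;

        /* Pixels left on the horizontal axis after the left margin,
         * two dots and the spacing between them; the vertical axis is
         * computed analogously with four dots. These leftovers are
         * then handed out in the priority order listed above. */
        int h_remaining = cell_width
            - (l.h_margin + l.dot_size + l.h_spacing + l.dot_size);
        (void)h_remaining;

        return l;
    }
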
Closes #702
When updating the selection (i.e when changing it - adding or removing
cells to the selection), we need to do two things:
* Unset the ‘selected’ bit on all cells that are no longer selected.
* Set the ‘selected’ bit on all cells that *are* selected.
Since it’s quite tricky to calculate the difference between the “old”
and “new” selection, this is done by first un-selecting the old
selection, and then selecting the new, updated selection. I.e. first
we clear the ‘selected’ bit from *all* cells, and then we re-set it on
those cells that are still selected.
This process also dirties the cells, to make sure they are
re-rendered (needed to reflect their new selected/un-selected status).
To avoid dirtying *all* previously selected, and newly selected cells,
we have used an algorithm that first runs a “pre-pass”, marking all
cells that *will* be selected as such. The un-select pass would then
skip (no dirty) cells that have been marked by the pre-pass. Finally,
the select pass would only dirty cells that have *not* been marked by
the pre-pass.
In short, we only dirty cells whose selection state has *changed*.
To do this, we used a second ‘selected’ bit in the cell attribute
struct.
Those bits are *scarce*.
This patch implements an alternative algorithm, that frees up one of
the two ‘selected’ bits.
This is done by lazily allocating a bitmask for the entire grid. The
pre-pass sets bits in the bitmask. Thus, after the pre-pass, the
bitmask has set bits for all cells that *will* be selected.
The un-select pass simply skips cells with a one-bit in the
bitmask. Cells without a one-bit in the bitmask are dirtied, and their
‘selected’ bit is cleared.
The select-pass doesn’t even have to look at the bitmask - if the cell
already has its ‘selected’ bit set, it does nothing. Otherwise it sets
it and dirties the cell.
The bitmask is implemented as an array of arrays of 64-bit
integers. Each outer element represents one row. These pointers are
calloc():ed before starting the pre-pass.
The pre-pass allocates the inner arrays on demand.
The unselect pass is designed to handle both the complete absence of a
bitmask, as well as row entries being NULL (both mean the cell
is *not* pre-marked, and will thus be dirtied).
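
A sketch of the bitmask helpers (names and exact layout are
illustrative):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct sel_bitmask {
        int rows, cols;
        uint64_t **bits;   /* bits[row] is NULL until the pre-pass touches it */
    };

    /* Called by the pre-pass: lazily allocate the row, then mark the cell */
    static void
    bitmask_set(struct sel_bitmask *m, int row, int col)
    {
        if (m->bits[row] == NULL)
            m->bits[row] = calloc((m->cols + 63) / 64, sizeof(uint64_t));
        m->bits[row][col / 64] |= UINT64_C(1) << (col % 64);
    }

    /* Called by the un-select pass: a missing bitmask, or a NULL row,
     * both mean the cell is not pre-marked */
    static bool
    bitmask_is_set(const struct sel_bitmask *m, int row, int col)
    {
        if (m == NULL || m->bits[row] == NULL)
            return false;
        return m->bits[row][col / 64] & (UINT64_C(1) << (col % 64));
    }
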
This fixes an issue where the left-most column of a sixel was
“overwritten” by the cell content.
This patch also rewrites the prepass logic, to try to reduce the
number of loads performed.
The new logic loops each row from left to right, looking for dirty
cells. When a dirty cell is found, we first scan backwards, until we
find a non-overflowing cell. That cell is unaffected by the
overflowing cell we’re currently dealing with.
We can also stop as soon as we see a dirty cell, since that cell will
already have been dealt with.
Then, we scan forward, dirtying cells until we see a non-overflowing
cell. That first non-overflowing cell is also dirtied, but after that
we break.
The last loop, that scans forward, advances the same cell pointer used
in the outer loop.
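
A literal, simplified transcription of that scan (foot's actual cell
and row structures differ):

    #include <stdbool.h>

    struct cell_state { bool dirty, overflows; };

    static void
    prepass_row(struct cell_state *row, int cols)
    {
        for (int c = 0; c < cols; c++) {
            if (!row[c].dirty)
                continue;

            /* Scan backwards; stop at the first non-overflowing cell
             * (unaffected), or at a dirty cell (already handled) */
            for (int b = c - 1; b >= 0; b--) {
                if (row[b].dirty || !row[b].overflows)
                    break;
                row[b].dirty = true;
            }

            /* Scan forward, dirtying cells up to and including the
             * first non-overflowing one; this advances the same index
             * used by the outer loop */
            while (c + 1 < cols) {
                c++;
                row[c].dirty = true;
                if (!row[c].overflows)
                    break;
            }
        }
    }
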
When the foot window is closed, and we need to terminate the client application,
do this in an asynchronous fashion:
* Don’t do a blocking call to waitpid(), instead, rely on the reaper callback
* Use a timer FD to implement the timeout before sending SIGKILL (instead of
using SIGALRM).
* Send SIGTERM immediately (we used to *just* close the PTY, and then wait 2
seconds before sending SIGTERM).
* Raise the timeout from 2 seconds to 60
Full shutdown now depends on *two* asynchronous tasks - unmapping the window,
and waiting for the client application to terminate.
Only when *both* of these have completed do we proceed and call term_destroy(),
and the user provided shutdown callback.
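
A hedged sketch of the timer-FD based kill timeout (the helper and the
event-loop registration are hypothetical; foot's actual FD manager
differs):

    #include <signal.h>
    #include <sys/timerfd.h>
    #include <sys/types.h>
    #include <time.h>
    #include <unistd.h>

    /* Send SIGTERM right away, and arm a timer FD that the event loop
     * watches; if it fires before the reaper callback has seen the
     * child exit, SIGKILL is sent. */
    static int
    arm_kill_timer(pid_t child)
    {
        kill(child, SIGTERM);

        int fd = timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC);
        if (fd < 0)
            return -1;

        struct itimerspec timeout = {.it_value = {.tv_sec = 60}};
        if (timerfd_settime(fd, 0, &timeout, NULL) < 0) {
            close(fd);
            return -1;
        }

        return fd;   /* caller registers this FD with the event loop */
    }
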
We still use the primary font, but use a custom size, based on the
title bar’s height.
This fixes an issue where the window title could be way too small or
way too big, and changed size when the terminal font size was changed.
Set clip region in render_osd(). This ensures we don’t step outside
the pixman buffer when rendering the glyphs.
Furthermore, don’t ignore the alpha channel in the background color.
Move render_osd() to make it visible to the render_csd_*() functions.
Up until now, *all* buffers have been tracked in a single, global
buffer list. We've used 'cookies' to separate buffers from different
contexts (so that shm_get_buffer() doesn't try to re-use e.g. a
search-box buffer for the main grid).
This patch refactors this, and completely removes the global
list.
Instead of cookies, we now use 'chains'. A chain tracks both the
properties to apply to newly created buffers (scrollable, number of
pixman instances to instantiate etc), as well as the instantiated
buffers themselves.
This means there's, strictly speaking, not much use for shm_fini()
anymore, since it's up to the chain owner to call shm_chain_free(),
which will also purge all buffers.
However, since purging a buffer may be deferred, if the buffer is
owned by the compositor at the time of the call to shm_purge() or
shm_chain_free(), we still keep a global 'deferred' list, on to which
deferred buffers are pushed. shm_fini() iterates this list and
destroys the buffers _even_ if they are still owned by the
compositor. This only happens at program termination, and not when
destroying a terminal instance. I.e. closing a window in a “foot
--server” does *not* trigger this.
Each terminal instantiates a number of chains, and these chains are
destroyed when the terminal instance is destroyed. Note that some
buffers may be put on the deferred list, as mentioned above.
The initial ref-count is either 1 or 0, depending on whether the
buffer is supposed to be released "immediately" (meaning, as soon as
the compositor releases it).
Two new user-facing functions have been added: shm_addref() and
shm_unref().
Our renderer now uses these two functions instead of manually setting
and clearing the 'locked' attribute.
shm_unref() will decrement the ref-counter, and destroy the buffer
when the counter reaches zero. Except if the buffer is currently
"busy" (compositor owned), in which case destruction is deferred to
the release event. The buffer is still removed from the list though.
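
The unref semantics, sketched with hypothetical helpers (foot's shm.c
differs in structure):

    #include <stdbool.h>

    struct buffer {
        unsigned ref_count;
        bool busy;   /* currently owned by the compositor */
        /* ... wl_buffer, pixman images, backing memory ... */
    };

    /* Hypothetical helpers, assumed to exist elsewhere */
    static void chain_remove(struct buffer *buf);
    static void defer_destroy(struct buffer *buf);
    static void buffer_destroy(struct buffer *buf);

    static void
    buffer_unref(struct buffer *buf)
    {
        if (--buf->ref_count > 0)
            return;

        /* The buffer leaves its chain in all cases */
        chain_remove(buf);

        if (buf->busy) {
            /* Compositor still owns it: defer destruction to the
             * wl_buffer release event (via the deferred list that
             * shm_fini() sweeps at program exit) */
            defer_destroy(buf);
            return;
        }

        buffer_destroy(buf);
    }
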