config: tweak.surface-bit-depth: add 16-bit

This adds support for 16-bit (integer) surfaces. The corresponding
pixman type (PIXMAN_a16b16g16r16) was added in 0.46.0.

If the new 16-bit type is requested but not supported by the
compositor, fall back to 16f-bit; if that also fails, to 10-bit,
and finally to 8-bit.
This commit is contained in:
Daniel Eklöf 2025-05-01 11:55:21 +02:00
parent 429a922723
commit 5dbf5ea89d
No known key found for this signature in database
GPG key ID: 5BBD4992C116573F
8 changed files with 60 additions and 16 deletions

@@ -2024,7 +2024,7 @@ any of these options.
 *surface-bit-depth*
 	Selects which RGB bit depth to use for image buffers. One of
-	*auto*, *8-bit*, *10-bit*, or *16f-bit*.
+	*auto*, *8-bit*, *10-bit*, *16-bit*, or *16f-bit*.
 	*auto* chooses bit depth depending on other settings, and
 	availability.
@@ -2036,11 +2036,12 @@ any of these options.
 	alpha channel. Thus, it provides higher precision color channels,
 	but a lower precision alpha channel.
-	*16f-bit* uses 16 bits (floating point) for each color channel,
-	alpha included. If available, this is the default when
-	*gamma-correct-blending=yes*.
+	*16-bit* and *16f-bit* use 16 bits (with *16f-bit* being floating
+	point) for each color channel, alpha included. If available, this
+	is the default when *gamma-correct-blending=yes* (with *16-bit*
+	being preferred over *16f-bit*).
-	Note that both *10-bit* and *16f-bit* are much slower than
+	Note that *10-bit*, *16-bit* and *16f-bit* are all much slower than
 	*8-bit*; if you want to use gamma-correct blending, and if you
 	prefer speed (throughput and input latency) over accurate colors,
 	you can set *surface-bit-depth=8-bit* explicitly.
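For reference, a minimal ini-style config fragment exercising the new value might look like this (the exact section the options live in is not shown in the diff, so it is omitted here):

```ini
# Request 16-bit integer surfaces. If the compositor does not
# support the format, fall back to 16f-bit, then 10-bit, then 8-bit.
surface-bit-depth=16-bit
gamma-correct-blending=yes
```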