render: fix rounding error when calculating background color with alpha

We use pre-multiplied alpha color channels, but were seeing bad
rounding errors because the alpha divider was being truncated to an
integer.

The algorithm for pre-multiplying a color channel is:

  alpha_divider = 0xffff / alpha
  pre_mult_color = color / alpha_divider

In order to fix the rounding errors, we could turn ‘alpha_divider’
into a double.

That, however, would introduce a performance penalty, since we would
then need to do floating point math for each cell.

The algorithm can be trivially converted to:

  pre_mult_color = color * alpha / 0xffff

Since both color and alpha values are < 65536, the multiplication is
“safe”; it will not overflow a uint32_t.

Closes #249
Daniel Eklöf 2020-12-20 12:25:12 +01:00
parent 35e28fc503
commit 339acc57cf
GPG key ID: 5BBD4992C116573F
2 changed files with 5 additions and 7 deletions


@@ -28,6 +28,8 @@
 * Missing dependencies in meson, causing heavily parallelized builds
   to fail.
+* Background color when alpha < 1.0 being wrong
+  (https://codeberg.org/dnkl/foot/issues/249).
 ### Security


@@ -213,14 +213,10 @@ attrs_to_font(const struct terminal *term, const struct attributes *attrs)
 static inline pixman_color_t
 color_hex_to_pixman_with_alpha(uint32_t color, uint16_t alpha)
 {
-    if (alpha == 0)
-        return (pixman_color_t){0, 0, 0, 0};
-
-    int alpha_div = 0xffff / alpha;
     return (pixman_color_t){
-        .red = ((color >> 16 & 0xff) | (color >> 8 & 0xff00)) / alpha_div,
-        .green = ((color >> 8 & 0xff) | (color >> 0 & 0xff00)) / alpha_div,
-        .blue = ((color >> 0 & 0xff) | (color << 8 & 0xff00)) / alpha_div,
+        .red = ((color >> 16 & 0xff) | (color >> 8 & 0xff00)) * alpha / 0xffff,
+        .green = ((color >> 8 & 0xff) | (color >> 0 & 0xff00)) * alpha / 0xffff,
+        .blue = ((color >> 0 & 0xff) | (color << 8 & 0xff00)) * alpha / 0xffff,
         .alpha = alpha,
     };
 }