<?xml version='1.0' encoding='utf-8' ?>
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
<!ENTITY % BOOK_ENTITIES SYSTEM "Wayland.ent">
%BOOK_ENTITIES;
]>
<chapter id="chap-Wayland-Architecture">
  <title>Wayland Architecture</title>
  <section id="sect-Wayland-Architecture-wayland_architecture">
    <title>X vs. Wayland Architecture</title>
    <para>
      A good way to understand the Wayland architecture and how it is
      different from X is to follow an event from the input device to the
      point where the change it causes appears on screen.
    </para>
    <para>
      This is where we are now with X:
    </para>
    <figure>
      <title>X architecture diagram</title>
      <mediaobjectco>
        <imageobjectco>
          <areaspec id="map1" units="other" otherunits="imagemap">
            <area id="area1_1" linkends="x_flow_1" x_steal="#step_1"/>
            <area id="area1_2" linkends="x_flow_2" x_steal="#step_2"/>
            <area id="area1_3" linkends="x_flow_3" x_steal="#step_3"/>
            <area id="area1_4" linkends="x_flow_4" x_steal="#step_4"/>
            <area id="area1_5" linkends="x_flow_5" x_steal="#step_5"/>
            <area id="area1_6" linkends="x_flow_6" x_steal="#step_6"/>
          </areaspec>
          <imageobject>
            <imagedata fileref="images/x-architecture.png" format="PNG" />
          </imageobject>
        </imageobjectco>
      </mediaobjectco>
    </figure>
    <para>
      <orderedlist>
        <listitem id="x_flow_1">
          <para>
            The kernel gets an event from an input device and sends it to X
            through the evdev input driver. The kernel does all the hard work
            here by driving the device and translating the different
            device-specific event protocols to the Linux evdev input event
            standard.
          </para>
        </listitem>
        <listitem id="x_flow_2">
          <para>
            The X server determines which window the event affects and sends
            it to the clients that have selected for the event in question on
            that window. The X server doesn't actually know how to do this
            right, since the window location on screen is controlled by the
            compositor and may be transformed in a number of ways that the X
            server doesn't understand (scaled down, rotated, wobbling, etc).
          </para>
        </listitem>
        <listitem id="x_flow_3">
          <para>
            The client looks at the event and decides what to do. Often the
            UI will have to change in response to the event - perhaps a check
            box was clicked or the pointer entered a button that must be
            highlighted. Thus the client sends a rendering request back to
            the X server.
          </para>
        </listitem>
        <listitem id="x_flow_4">
          <para>
            When the X server receives the rendering request, it sends it to
            the driver to let it program the hardware to do the rendering.
            The X server also calculates the bounding region of the rendering
            and sends that to the compositor as a damage event.
          </para>
        </listitem>
        <listitem id="x_flow_5">
          <para>
            The damage event tells the compositor that something changed in
            the window and that it has to recomposite the part of the screen
            where that window is visible. The compositor is responsible for
            rendering the entire screen contents based on its scenegraph and
            the contents of the X windows. Yet, it has to go through the X
            server to render this.
          </para>
        </listitem>
        <listitem id="x_flow_6">
          <para>
            The X server receives the rendering requests from the compositor
            and either copies the compositor back buffer to the front buffer
            or does a pageflip. In the general case, the X server has to do
            this step so it can account for overlapping windows, which may
            require clipping, and to determine whether or not it can
            pageflip. However, for a compositor, which is always fullscreen,
            this is another unnecessary context switch.
          </para>
        </listitem>
      </orderedlist>
    </para>
    <para>
      As suggested above, there are a few problems with this approach. The X
      server doesn't have the information to decide which window should
      receive the event, nor can it transform the screen coordinates to
      window-local coordinates. And even though X has handed responsibility
      for the final painting of the screen to the compositing manager, X
      still controls the front buffer and modesetting. Most of the complexity
      that the X server used to handle is now available in the kernel or in
      self-contained libraries (KMS, evdev, mesa, fontconfig, freetype,
      cairo, Qt, etc). In general, the X server is now just a middle man that
      introduces an extra step between applications and the compositor and an
      extra step between the compositor and the hardware.
    </para>
    <para>
      In Wayland the compositor is the display server. We transfer the
      control of KMS and evdev to the compositor. The Wayland protocol lets
      the compositor send the input events directly to the clients and lets
      the client send the damage event directly to the compositor:
    </para>
    <figure>
      <title>Wayland architecture diagram</title>
      <mediaobjectco>
        <imageobjectco>
          <areaspec id="mapB" units="other" otherunits="imagemap">
            <area id="areaB_1" linkends="wayland_flow_1" x_steal="#step_1"/>
            <area id="areaB_2" linkends="wayland_flow_2" x_steal="#step_2"/>
            <area id="areaB_3" linkends="wayland_flow_3" x_steal="#step_3"/>
            <area id="areaB_4" linkends="wayland_flow_4" x_steal="#step_4"/>
          </areaspec>
          <imageobject>
            <imagedata fileref="images/wayland-architecture.png" format="PNG" />
          </imageobject>
        </imageobjectco>
      </mediaobjectco>
    </figure>
    <para>
      <orderedlist>
        <listitem id="wayland_flow_1">
          <para>
            The kernel gets an event and sends it to the compositor. This is
            similar to the X case, which is great, since we get to reuse all
            the input drivers in the kernel.
          </para>
        </listitem>
        <listitem id="wayland_flow_2">
          <para>
            The compositor looks through its scenegraph to determine which
            window should receive the event. The scenegraph corresponds to
            what's on screen and the compositor understands the
            transformations that it may have applied to the elements in the
            scenegraph. Thus, the compositor can pick the right window and
            transform the screen coordinates to window-local coordinates, by
            applying the inverse transformations (a simplified sketch of this
            mapping follows the list). The types of transformations that can
            be applied to a window are restricted only by what the compositor
            can do, as long as it can compute the inverse transformation for
            the input events.
          </para>
        </listitem>
        <listitem id="wayland_flow_3">
          <para>
            As in the X case, when the client receives the event, it updates
            the UI in response. But in the Wayland case, the rendering
            happens in the client, and the client just sends a request to the
            compositor to indicate the region that was updated.
          </para>
        </listitem>
        <listitem id="wayland_flow_4">
          <para>
            The compositor collects damage requests from its clients and then
            recomposites the screen. The compositor can then directly issue
            an ioctl to schedule a pageflip with KMS.
          </para>
        </listitem>
      </orderedlist>
    </para>
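    <para>
      The inverse mapping in step 2 can be illustrated with a short sketch.
      The structure and its fields below are hypothetical compositor state,
      not Wayland API; they only show how a compositor that has translated
      and scaled a window can map a global pointer position back to
      window-local coordinates before delivering the event.
    </para>
    <programlisting><![CDATA[
/* Hypothetical compositor-side state: where a window was placed on
 * screen and how much the compositor scaled it.  Not Wayland API. */
struct window_transform {
    double x, y;    /* position of the window on screen */
    double scale;   /* uniform scale factor applied by the compositor */
};

/* Map a global (screen) coordinate to a window-local coordinate by
 * applying the inverse of the compositor's transformation. */
static void
screen_to_window_local(const struct window_transform *t,
                       double screen_x, double screen_y,
                       double *local_x, double *local_y)
{
    *local_x = (screen_x - t->x) / t->scale;
    *local_y = (screen_y - t->y) / t->scale;
}
]]></programlisting>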
  </section>
  <section id="sect-Wayland-Architecture-wayland_rendering">
    <title>Wayland Rendering</title>
    <para>
      One of the details I left out in the above overview is how clients
      actually render under Wayland. By removing the X server from the
      picture we also removed the mechanism by which X clients typically
      render. But there's another mechanism that we're already using with
      DRI2 under X: direct rendering. With direct rendering, the client and
      the server share a video memory buffer. The client links to a rendering
      library such as OpenGL that knows how to program the hardware and
      renders directly into the buffer. The compositor in turn can take the
      buffer and use it as a texture when it composites the desktop. After
      the initial setup, the client only needs to tell the compositor which
      buffer to use and when and where it has rendered new content into it.
    </para>
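    <para>
      With the core protocol, for example, a client that has finished drawing
      into a wl_buffer attaches it to its wl_surface, marks the region it
      touched as damaged and commits the surface. A minimal sketch, assuming
      the surface and buffer were created elsewhere (via wl_shm or the
      dma-buf path described later):
    </para>
    <programlisting><![CDATA[
#include <wayland-client.h>

/* Hand a freshly rendered buffer to the compositor and tell it which
 * region of the surface changed. */
static void
present_buffer(struct wl_surface *surface, struct wl_buffer *buffer,
               int32_t x, int32_t y, int32_t width, int32_t height)
{
    /* The buffer that now backs the surface... */
    wl_surface_attach(surface, buffer, 0, 0);
    /* ...the part of it that holds new content... */
    wl_surface_damage(surface, x, y, width, height);
    /* ...and atomically apply the new state. */
    wl_surface_commit(surface);
}
]]></programlisting>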
    <para>
      This leaves an application with two ways to update its window contents:
    </para>
    <para>
      <orderedlist>
        <listitem>
          <para>
            Render the new content into a new buffer and tell the compositor
            to use that instead of the old buffer. The application can
            allocate a new buffer every time it needs to update the window
            contents or it can keep two (or more) buffers around and cycle
            between them (see the sketch after this list). The buffer
            management is entirely under application control.
          </para>
        </listitem>
        <listitem>
          <para>
            Render the new content into the buffer that it previously told
            the compositor to use. While it's possible to just render
            directly into the buffer shared with the compositor, this might
            race with the compositor. What can happen is that repainting the
            window contents could be interrupted by the compositor repainting
            the desktop. If the application gets interrupted just after
            clearing the window but before rendering the contents, the
            compositor will texture from a blank buffer. The result is that
            the application window will flicker between a blank window and
            half-rendered content. The traditional way to avoid this is to
            render the new content into a back buffer and then copy from
            there into the compositor surface. The back buffer can be
            allocated on the fly and just big enough to hold the new content,
            or the application can keep a buffer around. Again, this is under
            application control.
          </para>
        </listitem>
      </orderedlist>
    </para>
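    <para>
      A common way to implement the first option is to keep a small pool of
      buffers and listen for the wl_buffer release event, which tells the
      client that the compositor no longer reads from a buffer and it can be
      reused. A sketch, where the pool bookkeeping (the busy flag and the
      pool array) is illustrative only and not part of any library:
    </para>
    <programlisting><![CDATA[
#include <stdbool.h>
#include <stddef.h>
#include <wayland-client.h>

struct pooled_buffer {
    struct wl_buffer *buffer;
    bool busy;                  /* still held by the compositor? */
};

static void
handle_release(void *data, struct wl_buffer *wl_buffer)
{
    struct pooled_buffer *pb = data;
    pb->busy = false;           /* compositor is done reading it */
}

static const struct wl_buffer_listener buffer_listener = {
    .release = handle_release,
};

/* Register for release events once, right after creating the buffer. */
static void
pool_buffer_init(struct pooled_buffer *pb, struct wl_buffer *buffer)
{
    pb->buffer = buffer;
    pb->busy = false;
    wl_buffer_add_listener(buffer, &buffer_listener, pb);
}

/* Pick a buffer the compositor is not using; the caller marks it busy
 * before attaching and committing it. */
static struct pooled_buffer *
next_free_buffer(struct pooled_buffer *pool, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        if (!pool[i].busy)
            return &pool[i];
    }
    return NULL;                /* all buffers in flight: wait or allocate */
}
]]></programlisting>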
    <para>
      In either case, the application must tell the compositor which area of
      the surface holds new contents. When the application renders directly
      to the shared buffer, the compositor needs to be notified that there is
      new content. But also when exchanging buffers, the compositor doesn't
      assume anything changed, and needs a request from the application
      before it will repaint the desktop. The idea is that even if an
      application passes a new buffer to the compositor, only a small part of
      the buffer may be different, like a blinking cursor or a spinner.
    </para>
  </section>
  <section id="sect-Wayland-Architecture-wayland_hw_enabling">
    <title>Accelerated GPU Buffer Exchange</title>
    <para>
      Clients
      <ulink url="https://docs.kernel.org/userspace-api/dma-buf-alloc-exchange.html">exchange</ulink>
      GPU buffers with the compositor as dma-buf file descriptors, which are
      universal handles that are independent of any particular rendering API
      or memory allocator. The
      <ulink url="https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/stable/linux-dmabuf/linux-dmabuf-v1.xml">linux-dmabuf-v1</ulink>
      protocol is used to turn one or more dma-buf FDs into a
      <link linkend="protocol-spec-wl_buffer">wl_buffer</link>.
    </para>
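    <para>
      A rough sketch of that conversion, using the client-side API that
      wayland-scanner generates from the protocol XML (the interface names
      keep the historical zwp_ prefix; the generated header name depends on
      how the scanner is invoked). A single-plane buffer is assumed, and the
      zwp_linux_dmabuf_v1 global, the dma-buf FD, the stride and the format
      modifier are obtained elsewhere:
    </para>
    <programlisting><![CDATA[
#include <stdint.h>
#include <drm_fourcc.h>                       /* DRM_FORMAT_* codes (libdrm) */
#include <wayland-client.h>
#include "linux-dmabuf-v1-client-protocol.h"  /* generated by wayland-scanner */

/* Wrap a single-plane dma-buf FD into a wl_buffer. */
static struct wl_buffer *
import_dmabuf(struct zwp_linux_dmabuf_v1 *dmabuf, int fd,
              uint32_t width, uint32_t height, uint32_t stride,
              uint64_t modifier)
{
    struct zwp_linux_buffer_params_v1 *params;
    struct wl_buffer *buffer;

    params = zwp_linux_dmabuf_v1_create_params(dmabuf);
    zwp_linux_buffer_params_v1_add(params, fd, 0 /* plane */, 0 /* offset */,
                                   stride,
                                   modifier >> 32, modifier & 0xffffffff);
    buffer = zwp_linux_buffer_params_v1_create_immed(params, width, height,
                                                     DRM_FORMAT_XRGB8888, 0);
    zwp_linux_buffer_params_v1_destroy(params);
    return buffer;
}
]]></programlisting>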
    <para>
      If the client uses the
      <ulink url="https://docs.vulkan.org/spec/latest/chapters/VK_KHR_surface/wsi.html">Vulkan</ulink>
      or
      <ulink url="https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_platform_wayland.txt">EGL</ulink>
      (via
      <ulink url="https://gitlab.freedesktop.org/wayland/wayland/-/tree/main/egl">wayland-egl</ulink>)
      window-system integration (WSI), this is done transparently by the WSI.
    </para>
    <para>
      Clients can alternatively allocate and import dma-bufs themselves using
      the GBM library, Vulkan, udmabuf, or dma-buf heaps.
    </para>
    <itemizedlist>
      <listitem>
        <para>
          Using GBM, the client can allocate a gbm_bo and export one or more
          dma-buf FDs from it (see the sketch after this list).
        </para>
      </listitem>
      <listitem>
        <para>
          Using Vulkan, the client can create a VkDeviceMemory object and use
          <ulink url="https://docs.vulkan.org/refpages/latest/refpages/source/VK_EXT_external_memory_dma_buf.html">VK_EXT_external_memory_dma_buf</ulink>
          and
          <ulink url="https://docs.vulkan.org/refpages/latest/refpages/source/VK_EXT_image_drm_format_modifier.html">VK_EXT_image_drm_format_modifier</ulink>
          to export a dma-buf FD from it.
        </para>
      </listitem>
      <listitem>
        <para>
          <ulink url="https://lwn.net/Articles/749206/">udmabuf</ulink>
          can be used to create dma-buf FDs from linear host memory.
        </para>
      </listitem>
      <listitem>
        <para>
          <ulink url="https://docs.kernel.org/userspace-api/dma-buf-heaps.html">Dma-buf heaps</ulink>
          can be used by privileged applications to create dma-buf FDs on
          embedded devices.
        </para>
      </listitem>
    </itemizedlist>
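    <para>
      The GBM path from the first item can look roughly like this; the device
      node, size, format and usage flags are assumptions for the sketch, and
      error handling and cleanup are omitted:
    </para>
    <programlisting><![CDATA[
#include <fcntl.h>
#include <stdint.h>
#include <gbm.h>

/* Allocate a GPU buffer with GBM and export it as a dma-buf FD that
 * can be handed to the compositor via linux-dmabuf-v1. */
static int
allocate_dmabuf(uint32_t *out_stride, uint64_t *out_modifier)
{
    int drm_fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
    struct gbm_device *gbm = gbm_create_device(drm_fd);
    struct gbm_bo *bo = gbm_bo_create(gbm, 800, 600,
                                      GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_RENDERING);

    *out_stride = gbm_bo_get_stride(bo);
    *out_modifier = gbm_bo_get_modifier(bo);
    return gbm_bo_get_fd(bo);   /* the dma-buf file descriptor */
}
]]></programlisting>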
    <para>
      Compositors use
      <ulink url="https://docs.vulkan.org/refpages/latest/refpages/source/VK_EXT_external_memory_dma_buf.html">VK_EXT_external_memory_dma_buf</ulink>
      and
      <ulink url="https://docs.vulkan.org/refpages/latest/refpages/source/VK_EXT_image_drm_format_modifier.html">VK_EXT_image_drm_format_modifier</ulink>
      or
      <ulink url="https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_image_dma_buf_import.txt">EGL_EXT_image_dma_buf_import</ulink>
      and
      <ulink url="https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_image_dma_buf_import_modifiers.txt">EGL_EXT_image_dma_buf_import_modifiers</ulink>
      to import the dma-bufs provided by the client into their own Vulkan or
      EGL renderers.
    </para>
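    <para>
      On the EGL side the import can look roughly as follows; a single-plane
      buffer without an explicit modifier is assumed, and the extension entry
      points are fetched through eglGetProcAddress as usual:
    </para>
    <programlisting><![CDATA[
#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Turn a client-provided dma-buf into a GLES texture that the
 * compositor can sample from while compositing. */
static GLuint
import_dmabuf_as_texture(EGLDisplay dpy, int fd, int width, int height,
                         int stride, uint32_t fourcc)
{
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    const EGLint attribs[] = {
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_LINUX_DRM_FOURCC_EXT, (EGLint)fourcc,
        EGL_DMA_BUF_PLANE0_FD_EXT, fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
        EGL_NONE,
    };
    EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                     EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    image_target_texture(GL_TEXTURE_2D, image);
    return tex;
}
]]></programlisting>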
    <para>
      Clients do not need to wait for the GPU to finish rendering before
      submitting dma-bufs to the compositor. Clients can use the
      <ulink url="https://gitlab.freedesktop.org/wayland/wayland-protocols/-/blob/main/staging/linux-drm-syncobj/linux-drm-syncobj-v1.xml">linux-drm-syncobj-v1</ulink>
      protocol to exchange DRM synchronization objects with the compositor.
      These objects are used to asynchronously signal ownership transfer of
      buffers from clients to the compositor and vice versa. The WSIs do this
      transparently.
    </para>
    <para>
      If the linux-drm-syncobj-v1 protocol is not supported by the
      compositor, clients and compositors can use the
      <ulink url="https://docs.kernel.org/driver-api/dma-buf.html#c.dma_buf_export_sync_file">DMA_BUF_IOCTL_EXPORT_SYNC_FILE</ulink>
      and
      <ulink url="https://docs.kernel.org/driver-api/dma-buf.html#c.dma_buf_import_sync_file">DMA_BUF_IOCTL_IMPORT_SYNC_FILE</ulink>
      ioctls to access and create implicit synchronization barriers.
    </para>
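    <para>
      As an illustration of this implicit-sync fallback, the two ioctls from
      the linux/dma-buf.h UAPI header can be used roughly as below; the
      choice of flags reflects a common client-side use and is a sketch under
      those assumptions, not a complete synchronization scheme:
    </para>
    <programlisting><![CDATA[
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Extract the fences a writer would implicitly wait for (all pending
 * reads and writes) as a sync_file FD that can be waited on explicitly,
 * e.g. before a client reuses a buffer the compositor may still read. */
static int
export_release_fence(int dmabuf_fd)
{
    struct dma_buf_export_sync_file arg = {
        .flags = DMA_BUF_SYNC_WRITE,
        .fd = -1,
    };
    if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
        return -1;
    return arg.fd;
}

/* Attach an explicit fence (e.g. from a Vulkan submission) to the
 * dma-buf as an implicit write fence, so an implicit-sync compositor
 * will not sample the buffer before rendering has finished. */
static int
import_render_fence(int dmabuf_fd, int sync_file_fd)
{
    struct dma_buf_import_sync_file arg = {
        .flags = DMA_BUF_SYNC_WRITE,
        .fd = sync_file_fd,
    };
    return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &arg);
}
]]></programlisting>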
  </section>
  <section id="sect-Wayland-Architecture-kms">
    <title>Display Programming</title>
    <para>
      Compositors enumerate DRM KMS devices using
      <ulink url="https://en.wikipedia.org/wiki/Udev">udev</ulink>.
      Udev also notifies compositors of KMS device and display hotplug
      events.
    </para>
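    <para>
      A minimal libudev sketch of that enumeration (hotplug notification
      works similarly through a udev_monitor; error handling omitted):
    </para>
    <programlisting><![CDATA[
#include <stdio.h>
#include <libudev.h>

/* Enumerate DRM KMS device nodes (e.g. /dev/dri/card0) via udev. */
static void
list_drm_devices(void)
{
    struct udev *udev = udev_new();
    struct udev_enumerate *e = udev_enumerate_new(udev);
    struct udev_list_entry *entry;

    udev_enumerate_add_match_subsystem(e, "drm");
    udev_enumerate_add_match_sysname(e, "card[0-9]*");
    udev_enumerate_scan_devices(e);

    udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(e)) {
        const char *path = udev_list_entry_get_name(entry);
        struct udev_device *dev = udev_device_new_from_syspath(udev, path);

        printf("KMS device node: %s\n", udev_device_get_devnode(dev));
        udev_device_unref(dev);
    }

    udev_enumerate_unref(e);
    udev_unref(udev);
}
]]></programlisting>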
    <para>
      Access to DRM KMS device ioctls is privileged. Since compositors
      usually run as unprivileged applications, they typically gain access to
      a privileged file descriptor using the
      <ulink url="https://www.freedesktop.org/software/systemd/man/latest/org.freedesktop.login1.html#Session%20Objects">TakeDevice</ulink>
      method provided by logind.
    </para>
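    <para>
      With sd-bus the call can be sketched as below; the session object path
      is assumed to have been obtained beforehand, and the file descriptor in
      the reply is owned by the message, so it has to be duplicated before
      the reply is freed (error handling omitted):
    </para>
    <programlisting><![CDATA[
#include <fcntl.h>
#include <systemd/sd-bus.h>

/* Ask logind for a file descriptor for a device node, identified by
 * the major/minor numbers of, e.g., /dev/dri/card0. */
static int
take_device(sd_bus *bus, const char *session_path,
            unsigned int major, unsigned int minor)
{
    sd_bus_message *reply = NULL;
    int fd = -1, paused = 0;

    sd_bus_call_method(bus, "org.freedesktop.login1", session_path,
                       "org.freedesktop.login1.Session", "TakeDevice",
                       NULL, &reply, "uu", major, minor);
    sd_bus_message_read(reply, "hb", &fd, &paused);

    /* The fd belongs to the reply message; keep our own copy. */
    fd = fcntl(fd, F_DUPFD_CLOEXEC, 0);
    sd_bus_message_unref(reply);
    return fd;
}
]]></programlisting>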
    <para>
      Using the file descriptor, compositors use KMS
      <ulink url="https://docs.kernel.org/gpu/drm-kms.html">ioctls</ulink>
      to enumerate the available displays.
    </para>
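    <para>
      Connected displays, for example, can be found by walking the connector
      list. A sketch using libdrm, where fd is the KMS file descriptor
      obtained above:
    </para>
    <programlisting><![CDATA[
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Print every connected display and its first (usually preferred) mode. */
static void
list_connected_displays(int fd)
{
    drmModeRes *res = drmModeGetResources(fd);

    for (int i = 0; i < res->count_connectors; i++) {
        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);

        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
            printf("connector %u: %ux%u@%u\n", conn->connector_id,
                   conn->modes[0].hdisplay, conn->modes[0].vdisplay,
                   conn->modes[0].vrefresh);
        drmModeFreeConnector(conn);
    }
    drmModeFreeResources(res);
}
]]></programlisting>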
    <para>
      Compositors use
      <ulink url="https://docs.kernel.org/gpu/drm-kms.html#atomic-mode-setting">atomic mode setting</ulink>
      to change the buffer shown by the display, to change the display's
      resolution, to enable or disable HDR, and so on.
    </para>
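    <para>
      An atomic page flip can then be sketched as below; the
      DRM_CLIENT_CAP_ATOMIC client cap must have been enabled, and prop_fb_id
      stands for the plane's "FB_ID" property ID, looked up once via
      drmModeObjectGetProperties and cached (both are assumptions of the
      sketch):
    </para>
    <programlisting><![CDATA[
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Atomically flip a plane to a new framebuffer; completion is signalled
 * later through a page-flip event on the DRM fd. */
static int
atomic_flip(int fd, uint32_t plane_id, uint32_t prop_fb_id,
            uint32_t new_fb_id, void *user_data)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    drmModeAtomicAddProperty(req, plane_id, prop_fb_id, new_fb_id);
    ret = drmModeAtomicCommit(fd, req,
                              DRM_MODE_ATOMIC_NONBLOCK |
                              DRM_MODE_PAGE_FLIP_EVENT, user_data);
    drmModeAtomicFree(req);
    return ret;
}
]]></programlisting>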
  </section>
</chapter>