/*
 * Copyright © 2008-2012 Kristian Høgsberg
 * Copyright © 2010-2012 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#define _GNU_SOURCE

#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <ctype.h>
#include <assert.h>
#include <fcntl.h>
#include <poll.h>
#include <pthread.h>

#include "wayland-util.h"
#include "wayland-os.h"
#include "wayland-client.h"
#include "wayland-private.h"

/** \cond */

enum wl_proxy_flag {
	WL_PROXY_FLAG_ID_DELETED = (1 << 0),
	WL_PROXY_FLAG_DESTROYED = (1 << 1)
};
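
/*
 * A proxy is reference counted: the count starts at 1 for the
 * reference held by the application, and every event that carries
 * the proxy as its target or as an object argument takes an extra
 * reference while the event sits in a queue.  wl_proxy_destroy()
 * sets WL_PROXY_FLAG_DESTROYED and drops the application's
 * reference, so the proxy is freed only once the last queued event
 * referencing it has been dequeued and the count reaches zero.
 */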

struct wl_proxy {
	struct wl_object object;
	struct wl_display *display;
	struct wl_event_queue *queue;
	uint32_t flags;
	int refcount;
	void *user_data;
	wl_dispatcher_func_t dispatcher;
};

struct wl_global {
	uint32_t id;
	char *interface;
	uint32_t version;
	struct wl_list link;
};

struct wl_event_queue {
	struct wl_list event_list;
	struct wl_display *display;
};

struct wl_display {
	struct wl_proxy proxy;
	struct wl_connection *connection;

	/* errno of the last wl_display error */
	int last_error;

	/* When display gets an error event from some object, it stores
	 * information about it here, so that client can get this
	 * information afterwards */
	struct {
		/* Code of the error. It can be compared to
		 * the interface's errors enumeration. */
		uint32_t code;
		/* interface (protocol) in which the error occurred */
		const struct wl_interface *interface;
		/* id of the proxy that caused the error. There's no
		 * guarantee that the proxy is still valid. It's up to
		 * the client how it will use it */
		uint32_t id;
	} protocol_error;
	int fd;
	struct wl_map objects;
	struct wl_event_queue display_queue;
	struct wl_event_queue default_queue;
	pthread_mutex_t mutex;

	int reader_count;
	uint32_t read_serial;
	pthread_cond_t reader_cond;
};
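
/*
 * The reader_count, read_serial and reader_cond members above back
 * the wl_display_prepare_read() / wl_display_read_events() API,
 * which lets any thread read the display fd without racing other
 * threads.  A minimal sketch of a race-free main loop, assuming the
 * public libwayland-client entry points and that fds[0] polls the
 * display fd:
 *
 *	while (wl_display_prepare_read(display) != 0)
 *		wl_display_dispatch_pending(display);
 *	wl_display_flush(display);
 *
 *	if (poll(fds, nfds, -1) > 0 && (fds[0].revents & POLLIN))
 *		wl_display_read_events(display);
 *	else
 *		wl_display_cancel_read(display);
 *
 *	wl_display_dispatch_pending(display);
 */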

/** \endcond */

static int debug_client = 0;

/**
 * This helper function wakes up all threads that are
 * waiting for display->reader_cond (i.e. when reading is done,
 * canceled, or an error occurred)
 *
 * NOTE: must be called with display->mutex locked
 */
static void
display_wakeup_threads(struct wl_display *display)
{
	/* Threads can only be sleeping in read_events().  If we are
	 * waking them up, it means that the read completed or was
	 * canceled, so we must increase the read_serial.  This
	 * prevents indefinite sleeping in read_events(). */
	++display->read_serial;

	pthread_cond_broadcast(&display->reader_cond);
}

/**
 * This function is called for local errors (no memory, server hung up)
 *
 * \param display
 * \param error error value (EINVAL, EFAULT, ...)
 *
 * \note this function is called with display mutex locked
 */
static void
display_fatal_error(struct wl_display *display, int error)
{
	if (display->last_error)
		return;

	if (!error)
		error = EFAULT;

	display->last_error = error;

	display_wakeup_threads(display);
}

/**
 * This function is called for error events and indicates that an
 * error occurred in some object.  The difference between this
 * function and display_fatal_error() is that this one handles errors
 * that come over the wire, whereas display_fatal_error() is called
 * for local errors.
 *
 * \param display
 * \param code error code
 * \param id id of the object that generated the error
 * \param intf protocol interface
 */
static void
display_protocol_error(struct wl_display *display, uint32_t code,
		       uint32_t id, const struct wl_interface *intf)
{
	int err;

	if (display->last_error)
		return;

	/* set correct errno */
	if (wl_interface_equal(intf, &wl_display_interface)) {
		switch (code) {
		case WL_DISPLAY_ERROR_INVALID_OBJECT:
		case WL_DISPLAY_ERROR_INVALID_METHOD:
			err = EINVAL;
			break;
		case WL_DISPLAY_ERROR_NO_MEMORY:
			err = ENOMEM;
			break;
		default:
			err = EFAULT;
		}
	} else {
		err = EPROTO;
	}

	pthread_mutex_lock(&display->mutex);

	display->last_error = err;

	display->protocol_error.code = code;
	display->protocol_error.id = id;
	display->protocol_error.interface = intf;

	/*
	 * It is not necessary to wake up threads here as in
	 * display_fatal_error(), because this function is called from
	 * an event handler, which means that read_events() is done and
	 * has woken up all threads.  Since wl_display_prepare_read()
	 * fails when there are events in the queue, no thread can be
	 * sleeping in read_events() during dispatching (and therefore
	 * while this function is called), so this is safe.
	 */

	pthread_mutex_unlock(&display->mutex);
}
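
/*
 * Illustrative sketch: after display_protocol_error() has stored the
 * details, a client can retrieve them through the public
 * wl_display_get_error() / wl_display_get_protocol_error() entry
 * points, e.g.:
 *
 *	const struct wl_interface *interface;
 *	uint32_t id, code;
 *
 *	if (wl_display_get_error(display) == EPROTO) {
 *		code = wl_display_get_protocol_error(display,
 *						     &interface, &id);
 *		fprintf(stderr, "error %u on %s@%u\n", code,
 *			interface ? interface->name : "?", id);
 *	}
 */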

static void
wl_event_queue_init(struct wl_event_queue *queue, struct wl_display *display)
{
	wl_list_init(&queue->event_list);
	queue->display = display;
}

/* Decrease the reference count of every proxy object passed in the
 * closure's 'n' (new id) and 'o' (object) arguments, clearing the
 * argument if the application has already destroyed the proxy, and
 * freeing the proxy once its last reference is dropped. */
static void
decrease_closure_args_refcount(struct wl_closure *closure)
{
	const char *signature;
	struct argument_details arg;
	int i, count;
	struct wl_proxy *proxy;

	signature = closure->message->signature;
	count = arg_count_for_signature(signature);
	for (i = 0; i < count; i++) {
		signature = get_next_argument(signature, &arg);
		switch (arg.type) {
		case 'n':
		case 'o':
			proxy = (struct wl_proxy *) closure->args[i].o;
			if (proxy) {
				if (proxy->flags & WL_PROXY_FLAG_DESTROYED)
					closure->args[i].o = NULL;

				proxy->refcount--;
				if (!proxy->refcount)
					free(proxy);
			}
			break;
		default:
			break;
		}
	}
}

void
proxy_destroy(struct wl_proxy *proxy);

static void
wl_event_queue_release(struct wl_event_queue *queue)
{
	struct wl_closure *closure;
	struct wl_proxy *proxy;
	bool proxy_destroyed;

	while (!wl_list_empty(&queue->event_list)) {
		closure = container_of(queue->event_list.next,
				       struct wl_closure, link);
		wl_list_remove(&closure->link);

		decrease_closure_args_refcount(closure);

		proxy = closure->proxy;
		proxy_destroyed = !!(proxy->flags & WL_PROXY_FLAG_DESTROYED);

		proxy->refcount--;
		if (proxy_destroyed && !proxy->refcount)
			free(proxy);

		wl_closure_destroy(closure);
	}
}

/** Destroy an event queue
 *
 * \param queue The event queue to be destroyed
 *
 * Destroy the given event queue. Any pending event on that queue is
 * discarded.
 *
 * The \ref wl_display object used to create the queue should not be
 * destroyed until all event queues created with it are destroyed with
 * this function.
 *
 * \memberof wl_event_queue
 */
WL_EXPORT void
wl_event_queue_destroy(struct wl_event_queue *queue)
{
	struct wl_display *display = queue->display;

	pthread_mutex_lock(&display->mutex);
	wl_event_queue_release(queue);
	free(queue);
	pthread_mutex_unlock(&display->mutex);
}

/** Create a new event queue for this display
 *
 * \param display The display context object
 * \return A new event queue associated with this display or NULL on
 * failure.
 *
 * \memberof wl_event_queue
 */
WL_EXPORT struct wl_event_queue *
wl_display_create_queue(struct wl_display *display)
{
	struct wl_event_queue *queue;

	queue = malloc(sizeof *queue);
	if (queue == NULL)
		return NULL;

	wl_event_queue_init(queue, display);

	return queue;
}
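
/*
 * Illustrative sketch: a subsystem such as EGL isolates its
 * dispatching by moving the proxies it owns to a private queue and
 * dispatching only that queue ('callback' and 'done' are
 * hypothetical names here):
 *
 *	struct wl_event_queue *queue;
 *	int ret = 0;
 *
 *	queue = wl_display_create_queue(display);
 *	wl_proxy_set_queue((struct wl_proxy *) callback, queue);
 *	while (!done && ret >= 0)
 *		ret = wl_display_dispatch_queue(display, queue);
 *	wl_event_queue_destroy(queue);
 */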

static struct wl_proxy *
proxy_create(struct wl_proxy *factory, const struct wl_interface *interface)
{
	struct wl_proxy *proxy;
	struct wl_display *display = factory->display;

	proxy = malloc(sizeof *proxy);
	if (proxy == NULL)
		return NULL;

	memset(proxy, 0, sizeof *proxy);

	proxy->object.interface = interface;
	proxy->display = display;
	proxy->queue = factory->queue;
	proxy->refcount = 1;

	proxy->object.id = wl_map_insert_new(&display->objects, 0, proxy);

	return proxy;
}

/** Create a proxy object with a given interface
 *
 * \param factory Factory proxy object
 * \param interface Interface the proxy object should use
 * \return A newly allocated proxy object or NULL on failure
 *
 * This function creates a new proxy object with the supplied interface. The
 * proxy object will have an id assigned from the client id space. The id
 * should be created on the compositor side by sending an appropriate request
 * with \ref wl_proxy_marshal().
 *
 * The proxy will inherit the display and event queue of the factory object.
 *
 * \note This should not normally be used by non-generated code.
 *
 * \sa wl_display, wl_event_queue, wl_proxy_marshal()
 *
 * \memberof wl_proxy
 */
WL_EXPORT struct wl_proxy *
wl_proxy_create(struct wl_proxy *factory, const struct wl_interface *interface)
{
	struct wl_display *display = factory->display;
	struct wl_proxy *proxy;

	pthread_mutex_lock(&display->mutex);
	proxy = proxy_create(factory, interface);
	pthread_mutex_unlock(&display->mutex);

	return proxy;
}
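
/*
 * Note that allocating a proxy with wl_proxy_create() and then
 * sending its ID with wl_proxy_marshal() is racy across threads: the
 * server requires each new ID to be exactly one above the highest ID
 * it has seen, so interleaved threads can make IDs arrive out of
 * order and have the connection closed.  wl_proxy_marshal_constructor()
 * exists to allocate the proxy and marshal the creating request
 * atomically under the display mutex; a sketch of generated code
 * (OPCODE and some_interface are placeholders):
 *
 *	new_proxy = wl_proxy_marshal_constructor(proxy, OPCODE,
 *						 &some_interface, NULL);
 */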
2012-10-11 14:55:59 +03:00
|
|
|
/* The caller should hold the display lock */
|
|
|
|
|
static struct wl_proxy *
|
2011-11-18 21:59:36 -05:00
|
|
|
wl_proxy_create_for_id(struct wl_proxy *factory,
|
|
|
|
|
uint32_t id, const struct wl_interface *interface)
|
|
|
|
|
{
|
|
|
|
|
struct wl_proxy *proxy;
|
|
|
|
|
struct wl_display *display = factory->display;
|
|
|
|
|
|
|
|
|
|
proxy = malloc(sizeof *proxy);
|
|
|
|
|
if (proxy == NULL)
|
|
|
|
|
return NULL;
|
|
|
|
|
|
2014-08-30 17:12:26 +02:00
|
|
|
memset(proxy, 0, sizeof *proxy);
|
|
|
|
|
|
2011-11-18 21:59:36 -05:00
|
|
|
proxy->object.interface = interface;
|
2011-11-15 08:58:34 -05:00
|
|
|
proxy->object.id = id;
|
2008-12-30 11:03:33 -05:00
|
|
|
proxy->display = display;
|
client: Add wl_event_queue for multi-thread dispatching
This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffers independently of
a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several threads.
Finally, the current callback model gets into trouble even in a single
threaded scenario: if we have to block in eglSwapBuffers, we may end up
doing unrelated callbacks from within EGL.
The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from objects
associated with the queue will be put in the queue, and conversely,
events from objects associated with the queue will not be queued up anywhere
else. The wl_display struct has a built-in event queue, which is considered
the main and default event queue. New proxies are associated with the
same queue as the object that created them (either the object that a
request with a new-id argument was sent to or the object that sent an
event with a new-id argument). A proxy can be moved to a different event
queue by calling wl_proxy_set_queue().
A subsystem, such as EGL, will then create its own event queue and associate
the objects it expects to receive events from with that queue. If EGL
needs to block and wait for a certain event, it can keep dispatching events
from its queue until that event comes in. This won't call out to unrelated
code with an EGL lock held. Similarly, we don't risk the main thread
handling an event from an EGL object and then calling into EGL from a
different thread without the lock held.
2012-10-05 13:49:48 -04:00
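A minimal sketch of that pattern, assuming a connected display and a
wl_callback proxy whose listener sets *done once the awaited event
arrives:

static int
wait_on_private_queue(struct wl_display *display,
		      struct wl_callback *callback, int *done)
{
	struct wl_event_queue *queue;
	int ret = 0;

	queue = wl_display_create_queue(display);
	if (queue == NULL)
		return -1;

	/* Events for this proxy are now delivered only to our queue. */
	wl_proxy_set_queue((struct wl_proxy *) callback, queue);

	/* Dispatch only our queue until the awaited event comes in;
	 * no unrelated handlers are invoked from here. */
	while (!*done && ret != -1)
		ret = wl_display_dispatch_queue(display, queue);

	wl_event_queue_destroy(queue);
	return ret;
}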
|
|
|
proxy->queue = factory->queue;
|
client: Keep track of proxy validity and number of reference holders
When events are queued, the associated proxy objects (target proxy and
potentially closure argument proxies) are verified to be valid. However,
as any event may destroy some proxy object, validity needs to be
verified again before dispatching. Before this change this was done by
again looking up the object via the display object map, but that did not
work because a delete_id event could be dispatched out-of-order if it
was queued in another queue, causing the object map to either have a new
proxy object with the same id or none at all, had it been destroyed in
an earlier event in the queue.
Instead, make wl_proxy reference counted and increase the reference
counter of every object associated with an event when it is queued. In
wl_proxy_destroy() set a flag saying the proxy has been destroyed by the
application and only free the proxy if the reference counter reaches
zero after decreasing it.
Before dispatching, verify that a proxy object still is valid by
checking that the flag set in wl_proxy_destroy() has not been set. When
dequeuing the event, all associated proxy objects are dereferenced and
freed if the reference counter reaches zero. As the proxy reference counter
is initialized to 1, when dispatching an event it can never reach zero
without having the destroyed flag set.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
2012-11-03 22:26:10 +01:00
|
|
|
proxy->refcount = 1;
|
2012-10-04 17:42:49 -04:00
|
|
|
|
2013-06-01 17:40:53 -05:00
|
|
|
wl_map_insert_at(&display->objects, 0, id, proxy);
|
2008-12-30 11:03:33 -05:00
|
|
|
|
|
|
|
|
return proxy;
|
|
|
|
|
}
|
|
|
|
|
|
2014-12-19 14:53:05 +01:00
|
|
|
void
|
|
|
|
|
proxy_destroy(struct wl_proxy *proxy)
|
2010-09-02 20:22:42 -04:00
|
|
|
{
|
2012-11-03 22:26:10 +01:00
|
|
|
if (proxy->flags & WL_PROXY_FLAG_ID_DELETED)
|
2012-10-05 13:49:48 -04:00
|
|
|
wl_map_remove(&proxy->display->objects, proxy->object.id);
|
|
|
|
|
else if (proxy->object.id < WL_SERVER_ID_START)
|
2013-06-01 17:40:53 -05:00
|
|
|
wl_map_insert_at(&proxy->display->objects, 0,
|
2011-11-18 21:59:36 -05:00
|
|
|
proxy->object.id, WL_ZOMBIE_OBJECT);
|
|
|
|
|
else
|
2013-06-01 17:40:53 -05:00
|
|
|
wl_map_insert_at(&proxy->display->objects, 0,
|
2011-11-18 21:59:36 -05:00
|
|
|
proxy->object.id, NULL);
|
2012-10-04 17:42:49 -04:00
|
|
|
|
|
|
|
|
|
2012-11-03 22:26:10 +01:00
|
|
|
proxy->flags |= WL_PROXY_FLAG_DESTROYED;
|
|
|
|
|
|
|
|
|
|
proxy->refcount--;
|
|
|
|
|
if (!proxy->refcount)
|
|
|
|
|
free(proxy);
|
2014-12-19 14:53:05 +01:00
|
|
|
}
|
2012-11-03 22:26:10 +01:00
|
|
|
|
2014-12-19 14:53:05 +01:00
|
|
|
/** Destroy a proxy object
|
|
|
|
|
*
|
|
|
|
|
* \param proxy The proxy to be destroyed
|
|
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
|
|
|
|
WL_EXPORT void
|
|
|
|
|
wl_proxy_destroy(struct wl_proxy *proxy)
|
|
|
|
|
{
|
|
|
|
|
struct wl_display *display = proxy->display;
|
|
|
|
|
|
|
|
|
|
pthread_mutex_lock(&display->mutex);
|
|
|
|
|
proxy_destroy(proxy);
|
2012-11-03 22:26:10 +01:00
|
|
|
pthread_mutex_unlock(&display->mutex);
|
2010-09-02 20:22:42 -04:00
|
|
|
}
|
|
|
|
|
|
2012-10-12 17:28:57 +03:00
|
|
|
/** Set a proxy's listener
|
|
|
|
|
*
|
|
|
|
|
* \param proxy The proxy object
|
|
|
|
|
* \param implementation The listener to be added to proxy
|
|
|
|
|
* \param data User data to be associated with the proxy
|
|
|
|
|
* \return 0 on success or -1 on failure
|
|
|
|
|
*
|
|
|
|
|
* Set proxy's listener to \c implementation and its user data to
|
2012-11-22 18:09:32 -02:00
|
|
|
* \c data. If a listener has already been set, this function
|
2012-10-12 17:28:57 +03:00
|
|
|
* fails and nothing is changed.
|
|
|
|
|
*
|
|
|
|
|
* \c implementation is a vector of function pointers. For an opcode
|
2012-11-22 18:09:32 -02:00
|
|
|
* \c n, \c implementation[n] should point to the handler of \c n for
|
2012-10-12 17:28:57 +03:00
|
|
|
* the given object.
|
|
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
2010-08-10 10:53:44 -04:00
|
|
|
WL_EXPORT int
|
2010-08-10 14:02:48 -04:00
|
|
|
wl_proxy_add_listener(struct wl_proxy *proxy,
|
|
|
|
|
void (**implementation)(void), void *data)
|
2008-12-21 23:37:12 -05:00
|
|
|
{
|
2013-07-17 21:58:47 -05:00
|
|
|
if (proxy->object.implementation || proxy->dispatcher) {
|
2014-04-30 12:18:52 -07:00
|
|
|
wl_log("proxy %p already has listener\n", proxy);
|
2008-12-30 11:03:33 -05:00
|
|
|
return -1;
|
2011-02-18 15:28:54 -05:00
|
|
|
}
|
2008-12-30 11:03:33 -05:00
|
|
|
|
2011-02-18 15:28:54 -05:00
|
|
|
proxy->object.implementation = implementation;
|
|
|
|
|
proxy->user_data = data;
|
2008-12-30 11:03:33 -05:00
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
}
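To illustrate the opcode-indexed vector, a sketch of a typical
listener for wl_registry; the generated wl_registry_add_listener()
is a thin wrapper around this function:

static void
registry_handle_global(void *data, struct wl_registry *registry,
		       uint32_t name, const char *interface,
		       uint32_t version)
{
	/* implementation[0]: handler for opcode 0, wl_registry.global */
}

static void
registry_handle_global_remove(void *data, struct wl_registry *registry,
			      uint32_t name)
{
	/* implementation[1]: handler for opcode 1, wl_registry.global_remove */
}

static const struct wl_registry_listener registry_listener = {
	registry_handle_global,
	registry_handle_global_remove
};

/* ... wl_registry_add_listener(registry, &registry_listener, data); */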
|
|
|
|
|
|
2013-07-22 17:30:52 +01:00
|
|
|
/** Get a proxy's listener
|
|
|
|
|
*
|
|
|
|
|
* \param proxy The proxy object
|
|
|
|
|
* \return The address of the proxy's listener or NULL if no listener is set
|
|
|
|
|
*
|
|
|
|
|
* Gets the address of the proxy's listener, which is the listener set with
|
|
|
|
|
* \ref wl_proxy_add_listener.
|
|
|
|
|
*
|
2014-11-17 14:59:14 -06:00
|
|
|
* This function is useful in clients with multiple listeners on the same
|
|
|
|
|
* interface to allow the identification of which code to execute.
|
2013-07-22 17:30:52 +01:00
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
|
|
|
|
WL_EXPORT const void *
|
|
|
|
|
wl_proxy_get_listener(struct wl_proxy *proxy)
|
|
|
|
|
{
|
|
|
|
|
return proxy->object.implementation;
|
|
|
|
|
}
|
|
|
|
|
|
2013-07-17 21:58:47 -05:00
|
|
|
/** Set a proxy's listener (with dispatcher)
|
|
|
|
|
*
|
|
|
|
|
* \param proxy The proxy object
|
|
|
|
|
* \param dispatcher The dispatcher to be used for this proxy
|
|
|
|
|
* \param implementation The dispatcher-specific listener implementation
|
|
|
|
|
* \param data User data to be associated with the proxy
|
|
|
|
|
* \return 0 on success or -1 on failure
|
|
|
|
|
*
|
|
|
|
|
* Set proxy's listener to use \c dispatcher_func as its dispatcher and \c
|
|
|
|
|
* dispatcher_data as its dispatcher-specific implementation and its user data
|
|
|
|
|
* to \c data. If a listener has already been set, this function
|
|
|
|
|
* fails and nothing is changed.
|
|
|
|
|
*
|
|
|
|
|
* The exact details of dispatcher_data depend on the dispatcher used. This
|
|
|
|
|
* function is intended to be used by language bindings, not user code.
|
|
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
|
|
|
|
WL_EXPORT int
|
|
|
|
|
wl_proxy_add_dispatcher(struct wl_proxy *proxy,
|
|
|
|
|
wl_dispatcher_func_t dispatcher,
|
|
|
|
|
const void *implementation, void *data)
|
|
|
|
|
{
|
|
|
|
|
if (proxy->object.implementation || proxy->dispatcher) {
|
2014-04-30 12:18:52 -07:00
|
|
|
wl_log("proxy %p already has listener\n");
|
2013-07-17 21:58:47 -05:00
|
|
|
return -1;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
proxy->object.implementation = implementation;
|
|
|
|
|
proxy->dispatcher = dispatcher;
|
|
|
|
|
proxy->user_data = data;
|
|
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
}
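A sketch of what a binding-side dispatcher might look like; the name
binding_dispatcher is hypothetical, and how the arguments are decoded
and forwarded is entirely up to the binding:

static int
binding_dispatcher(const void *implementation, void *target,
		   uint32_t opcode, const struct wl_message *message,
		   union wl_argument *args)
{
	struct wl_proxy *proxy = target;

	/* 'implementation' is the dispatcher-specific data passed to
	 * wl_proxy_add_dispatcher(); decode 'args' according to
	 * message->signature and call into the bound language here. */
	(void) proxy;
	(void) implementation;
	(void) opcode;
	(void) message;
	(void) args;

	return 0;
}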
|
|
|
|
|
|
2013-11-14 21:29:06 -08:00
|
|
|
static struct wl_proxy *
|
|
|
|
|
create_outgoing_proxy(struct wl_proxy *proxy, const struct wl_message *message,
|
|
|
|
|
union wl_argument *args,
|
|
|
|
|
const struct wl_interface *interface)
|
|
|
|
|
{
|
|
|
|
|
int i, count;
|
|
|
|
|
const char *signature;
|
|
|
|
|
struct argument_details arg;
|
|
|
|
|
struct wl_proxy *new_proxy = NULL;
|
|
|
|
|
|
|
|
|
|
signature = message->signature;
|
|
|
|
|
count = arg_count_for_signature(signature);
|
|
|
|
|
for (i = 0; i < count; i++) {
|
|
|
|
|
signature = get_next_argument(signature, &arg);
|
|
|
|
|
|
|
|
|
|
switch (arg.type) {
|
|
|
|
|
case 'n':
|
|
|
|
|
new_proxy = proxy_create(proxy, interface);
|
|
|
|
|
if (new_proxy == NULL)
|
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
|
|
args[i].o = &new_proxy->object;
|
|
|
|
|
break;
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return new_proxy;
|
|
|
|
|
}
|
|
|
|
|
|
2012-10-15 17:53:23 +03:00
|
|
|
/** Prepare a request to be sent to the compositor
|
|
|
|
|
*
|
|
|
|
|
* \param proxy The proxy object
|
|
|
|
|
* \param opcode Opcode of the request to be sent
|
2013-11-14 21:29:06 -08:00
|
|
|
* \param args Extra arguments for the given request
|
2013-12-09 22:35:22 +01:00
|
|
|
* \param interface The interface to use for the new proxy
|
2012-10-15 17:53:23 +03:00
|
|
|
*
|
2015-10-02 17:32:53 +08:00
|
|
|
* This function translates a request given an opcode, an interface and a
|
|
|
|
|
* wl_argument array to the wire format and writes it to the connection
|
|
|
|
|
* buffer.
|
2013-11-14 21:29:06 -08:00
|
|
|
*
|
|
|
|
|
* For new-id arguments, this function will allocate a new wl_proxy
|
|
|
|
|
* and send the ID to the server. The new wl_proxy will be returned
|
|
|
|
|
* on success or NULL on error with errno set accordingly.
|
|
|
|
|
*
|
|
|
|
|
* \note This is intended to be used by language bindings, not by
|
|
|
|
|
* user code.
|
|
|
|
|
*
|
|
|
|
|
* \sa wl_proxy_marshal()
|
|
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
|
|
|
|
WL_EXPORT struct wl_proxy *
|
|
|
|
|
wl_proxy_marshal_array_constructor(struct wl_proxy *proxy,
|
|
|
|
|
uint32_t opcode, union wl_argument *args,
|
|
|
|
|
const struct wl_interface *interface)
|
|
|
|
|
{
|
|
|
|
|
struct wl_closure *closure;
|
|
|
|
|
struct wl_proxy *new_proxy = NULL;
|
|
|
|
|
const struct wl_message *message;
|
|
|
|
|
|
|
|
|
|
pthread_mutex_lock(&proxy->display->mutex);
|
|
|
|
|
|
|
|
|
|
message = &proxy->object.interface->methods[opcode];
|
|
|
|
|
if (interface) {
|
|
|
|
|
new_proxy = create_outgoing_proxy(proxy, message,
|
|
|
|
|
args, interface);
|
|
|
|
|
if (new_proxy == NULL)
|
|
|
|
|
goto err_unlock;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
closure = wl_closure_marshal(&proxy->object, opcode, args, message);
|
2015-11-16 11:49:02 +01:00
|
|
|
if (closure == NULL)
|
|
|
|
|
wl_abort("Error marshalling request: %s\n", strerror(errno));
|
2013-11-14 21:29:06 -08:00
|
|
|
|
2013-12-18 20:56:18 -06:00
|
|
|
if (debug_client)
|
2013-11-14 21:29:06 -08:00
|
|
|
wl_closure_print(closure, &proxy->object, true);
|
|
|
|
|
|
2015-11-16 11:49:02 +01:00
|
|
|
if (wl_closure_send(closure, proxy->display->connection))
|
|
|
|
|
wl_abort("Error sending request: %s\n", strerror(errno));
|
2013-11-14 21:29:06 -08:00
|
|
|
|
|
|
|
|
wl_closure_destroy(closure);
|
|
|
|
|
|
|
|
|
|
err_unlock:
|
|
|
|
|
pthread_mutex_unlock(&proxy->display->mutex);
|
|
|
|
|
|
|
|
|
|
return new_proxy;
|
|
|
|
|
}
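As a hedged illustration, a hypothetical binding helper driving this
entry point for a request whose only argument is the new-id; the 'n'
slot may be left NULL since create_outgoing_proxy() above fills it in
under the display mutex:

static struct wl_proxy *
binding_send_constructor(struct wl_proxy *parent, uint32_t opcode,
			 const struct wl_interface *interface)
{
	union wl_argument args[1];

	args[0].o = NULL;	/* new-id slot, populated by libwayland */

	return wl_proxy_marshal_array_constructor(parent, opcode,
						  args, interface);
}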
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
/** Prepare a request to be sent to the compositor
|
2012-10-15 17:53:23 +03:00
|
|
|
*
|
2013-11-14 21:29:06 -08:00
|
|
|
* \param proxy The proxy object
|
|
|
|
|
* \param opcode Opcode of the request to be sent
|
|
|
|
|
* \param ... Extra arguments for the given request
|
2012-10-15 17:53:23 +03:00
|
|
|
*
|
2013-11-14 21:29:06 -08:00
|
|
|
* This function is similar to wl_proxy_marshal_constructor(), except
|
|
|
|
|
* it doesn't create proxies for new-id arguments.
|
2012-10-15 17:53:23 +03:00
|
|
|
*
|
|
|
|
|
* \note This should normally be used only by generated code.
|
|
|
|
|
*
|
|
|
|
|
* \sa wl_proxy_create()
|
2012-10-12 17:28:57 +03:00
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
2010-08-09 21:25:50 -04:00
|
|
|
WL_EXPORT void
|
2008-12-30 11:03:33 -05:00
|
|
|
wl_proxy_marshal(struct wl_proxy *proxy, uint32_t opcode, ...)
|
|
|
|
|
{
|
2013-07-17 21:58:47 -05:00
|
|
|
union wl_argument args[WL_CLOSURE_MAX_ARGS];
|
2008-12-30 11:03:33 -05:00
|
|
|
va_list ap;
|
|
|
|
|
|
|
|
|
|
va_start(ap, opcode);
|
2013-07-17 21:58:47 -05:00
|
|
|
wl_argument_from_va_list(proxy->object.interface->methods[opcode].signature,
|
|
|
|
|
args, WL_CLOSURE_MAX_ARGS, ap);
|
2008-12-30 11:03:33 -05:00
|
|
|
va_end(ap);
|
2010-09-07 21:34:45 -04:00
|
|
|
|
2013-11-14 21:29:06 -08:00
|
|
|
wl_proxy_marshal_array_constructor(proxy, opcode, args, NULL);
|
2013-07-17 21:58:47 -05:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/** Prepare a request to be sent to the compositor
|
|
|
|
|
*
|
|
|
|
|
* \param proxy The proxy object
|
|
|
|
|
* \param opcode Opcode of the request to be sent
|
2013-12-09 22:35:22 +01:00
|
|
|
* \param interface The interface to use for the new proxy
|
2013-11-14 21:29:06 -08:00
|
|
|
* \param ... Extra arguments for the given request
|
|
|
|
|
* \return A new wl_proxy for the new_id argument or NULL on error
|
2013-07-17 21:58:47 -05:00
|
|
|
*
|
2015-10-02 17:32:53 +08:00
|
|
|
* This function translates a request given an opcode, an interface and extra
|
|
|
|
|
* arguments to the wire format and writes it to the connection buffer. The
|
|
|
|
|
* types of the extra arguments must correspond to the argument types of the
|
|
|
|
|
* method associated with the opcode in the interface.
|
2013-11-14 21:29:06 -08:00
|
|
|
*
|
|
|
|
|
* For new-id arguments, this function will allocate a new wl_proxy
|
|
|
|
|
* and send the ID to the server. The new wl_proxy will be returned
|
|
|
|
|
* on success or NULL on error with errno set accordingly.
|
|
|
|
|
*
|
|
|
|
|
* \note This should normally be used only by generated code.
|
|
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
|
|
|
|
WL_EXPORT struct wl_proxy *
|
|
|
|
|
wl_proxy_marshal_constructor(struct wl_proxy *proxy, uint32_t opcode,
|
|
|
|
|
const struct wl_interface *interface, ...)
|
|
|
|
|
{
|
|
|
|
|
union wl_argument args[WL_CLOSURE_MAX_ARGS];
|
|
|
|
|
va_list ap;
|
|
|
|
|
|
|
|
|
|
va_start(ap, interface);
|
|
|
|
|
wl_argument_from_va_list(proxy->object.interface->methods[opcode].signature,
|
|
|
|
|
args, WL_CLOSURE_MAX_ARGS, ap);
|
|
|
|
|
va_end(ap);
|
|
|
|
|
|
|
|
|
|
return wl_proxy_marshal_array_constructor(proxy, opcode,
|
|
|
|
|
args, interface);
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/** Prepare a request to be sent to the compositor
|
|
|
|
|
*
|
|
|
|
|
* \param proxy The proxy object
|
|
|
|
|
* \param opcode Opcode of the request to be sent
|
|
|
|
|
* \param args Extra arguments for the given request
|
|
|
|
|
*
|
|
|
|
|
* This function is similar to wl_proxy_marshal_array_constructor(), except
|
|
|
|
|
* it doesn't create proxies for new-id arguments.
|
2013-07-17 21:58:47 -05:00
|
|
|
*
|
|
|
|
|
* \note This is intended to be used by language bindings, not by
|
|
|
|
|
* user code.
|
|
|
|
|
*
|
|
|
|
|
* \sa wl_proxy_marshal()
|
|
|
|
|
*
|
|
|
|
|
* \memberof wl_proxy
|
|
|
|
|
*/
|
|
|
|
|
WL_EXPORT void
|
|
|
|
|
wl_proxy_marshal_array(struct wl_proxy *proxy, uint32_t opcode,
|
|
|
|
|
union wl_argument *args)
|
|
|
|
|
{
|
2013-11-14 21:29:06 -08:00
|
|
|
wl_proxy_marshal_array_constructor(proxy, opcode, args, NULL);
|
2008-12-30 11:03:33 -05:00
|
|
|
}
|
|
|
|
|
|
2008-12-24 19:30:25 -05:00
|
|
|
static void
|
2011-05-11 10:57:06 -04:00
|
|
|
display_handle_error(void *data,
|
2013-06-27 20:09:18 -05:00
|
|
|
struct wl_display *display, void *object,
|
2011-05-11 10:57:06 -04:00
|
|
|
uint32_t code, const char *message)
|
2008-12-24 19:30:25 -05:00
|
|
|
{
|
2013-06-27 20:09:18 -05:00
|
|
|
struct wl_proxy *proxy = object;
|
2012-10-11 23:37:42 +02:00
|
|
|
|
|
|
|
|
wl_log("%s@%u: error %d: %s\n",
|
2013-06-27 20:09:18 -05:00
|
|
|
proxy->object.interface->name, proxy->object.id, code, message);
|
2012-10-11 23:37:42 +02:00
|
|
|
|
2014-06-20 09:29:52 +02:00
|
|
|
display_protocol_error(display, code, proxy->object.id,
|
|
|
|
|
proxy->object.interface);
|
2008-12-24 19:30:25 -05:00
|
|
|
}
|
|
|
|
|
|
2011-11-15 22:20:28 -05:00
|
|
|
static void
|
|
|
|
|
display_handle_delete_id(void *data, struct wl_display *display, uint32_t id)
|
|
|
|
|
{
|
|
|
|
|
struct wl_proxy *proxy;
|
|
|
|
|
|
2012-10-04 17:42:49 -04:00
|
|
|
pthread_mutex_lock(&display->mutex);
|
|
|
|
|
|
2011-11-15 22:20:28 -05:00
|
|
|
proxy = wl_map_lookup(&display->objects, id);
|
2013-04-04 17:26:57 +01:00
|
|
|
|
|
|
|
|
if (!proxy)
|
|
|
|
|
wl_log("error: received delete_id for unknown id (%u)\n", id);
|
|
|
|
|
|
|
|
|
|
if (proxy && proxy != WL_ZOMBIE_OBJECT)
|
2012-11-03 22:26:10 +01:00
|
|
|
proxy->flags |= WL_PROXY_FLAG_ID_DELETED;
|
2011-11-15 22:20:28 -05:00
|
|
|
else
|
|
|
|
|
wl_map_remove(&display->objects, id);
|
2012-10-04 17:42:49 -04:00
|
|
|
|
|
|
|
|
pthread_mutex_unlock(&display->mutex);
|
2011-11-15 22:20:28 -05:00
|
|
|
}
|
|
|
|
|
|
2008-12-24 19:30:25 -05:00
|
|
|
static const struct wl_display_listener display_listener = {
|
2011-05-11 10:57:06 -04:00
|
|
|
display_handle_error,
|
2011-11-15 22:20:28 -05:00
|
|
|
display_handle_delete_id
|
2008-12-24 19:30:25 -05:00
|
|
|
};
|
|
|
|
|
|
2011-04-11 09:14:43 -04:00
|
|
|
static int
|
2012-08-14 13:16:10 -04:00
|
|
|
connect_to_socket(const char *name)
|
2008-10-07 10:10:36 -04:00
|
|
|
{
|
2008-12-07 15:22:22 -05:00
|
|
|
struct sockaddr_un addr;
|
2008-10-07 10:10:36 -04:00
|
|
|
socklen_t size;
|
2010-12-01 15:36:20 -05:00
|
|
|
const char *runtime_dir;
|
2012-08-14 13:16:10 -04:00
|
|
|
int name_size, fd;
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2010-12-01 15:36:20 -05:00
|
|
|
runtime_dir = getenv("XDG_RUNTIME_DIR");
|
2012-06-06 14:30:18 +03:00
|
|
|
if (!runtime_dir) {
|
2014-04-30 12:18:52 -07:00
|
|
|
wl_log("error: XDG_RUNTIME_DIR not set in the environment.\n");
|
2012-06-06 14:30:18 +03:00
|
|
|
/* to prevent programs reporting
|
|
|
|
|
* "failed to create display: Success" */
|
|
|
|
|
errno = ENOENT;
|
|
|
|
|
return -1;
|
2010-12-01 15:36:20 -05:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (name == NULL)
|
|
|
|
|
name = getenv("WAYLAND_DISPLAY");
|
2015-08-17 15:20:28 +03:00
|
|
|
if (name == NULL)
|
|
|
|
|
name = "wayland-0";
|
2010-12-01 15:36:20 -05:00
|
|
|
|
2012-08-14 13:16:10 -04:00
|
|
|
fd = wl_os_socket_cloexec(PF_LOCAL, SOCK_STREAM, 0);
|
|
|
|
|
if (fd < 0)
|
2012-06-06 14:30:18 +03:00
|
|
|
return -1;
|
|
|
|
|
|
2010-12-01 15:36:20 -05:00
|
|
|
memset(&addr, 0, sizeof addr);
|
2008-12-07 15:22:22 -05:00
|
|
|
addr.sun_family = AF_LOCAL;
|
2010-12-01 15:36:20 -05:00
|
|
|
name_size =
|
|
|
|
|
snprintf(addr.sun_path, sizeof addr.sun_path,
|
|
|
|
|
"%s/%s", runtime_dir, name) + 1;
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2012-06-15 21:39:50 +00:00
|
|
|
assert(name_size > 0);
|
|
|
|
|
if (name_size > (int)sizeof addr.sun_path) {
|
2014-04-30 12:18:52 -07:00
|
|
|
wl_log("error: socket path \"%s/%s\" plus null terminator"
|
2012-06-15 21:39:50 +00:00
|
|
|
" exceeds 108 bytes\n", runtime_dir, name);
|
2012-08-14 13:16:10 -04:00
|
|
|
close(fd);
|
2012-06-15 21:39:50 +00:00
|
|
|
/* to prevent programs reporting
|
|
|
|
|
* "failed to add socket: Success" */
|
|
|
|
|
errno = ENAMETOOLONG;
|
|
|
|
|
return -1;
|
|
|
|
|
}
|
|
|
|
|
|
2008-12-07 15:22:22 -05:00
|
|
|
size = offsetof (struct sockaddr_un, sun_path) + name_size;
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2012-08-14 13:16:10 -04:00
|
|
|
if (connect(fd, (struct sockaddr *) &addr, size) < 0) {
|
|
|
|
|
close(fd);
|
2011-04-11 09:14:43 -04:00
|
|
|
return -1;
|
|
|
|
|
}
|
|
|
|
|
|
2012-08-14 13:16:10 -04:00
|
|
|
return fd;
|
2011-04-11 09:14:43 -04:00
|
|
|
}
|
|
|
|
|
|
2012-10-12 17:28:57 +03:00
|
|
|
/** Connect to Wayland display on an already open fd
|
|
|
|
|
*
|
|
|
|
|
* \param fd The fd to use for the connection
|
|
|
|
|
* \return A \ref wl_display object or \c NULL on failure
|
|
|
|
|
*
|
2012-10-15 17:50:36 -04:00
|
|
|
* The wl_display takes ownership of the fd and will close it when the
|
|
|
|
|
* display is destroyed. The fd will also be closed in case of
|
|
|
|
|
* failure.
|
|
|
|
|
*
|
2012-10-12 17:28:57 +03:00
|
|
|
* \memberof wl_display
|
|
|
|
|
*/
|
2011-04-11 09:14:43 -04:00
|
|
|
WL_EXPORT struct wl_display *
|
2012-08-14 13:16:10 -04:00
|
|
|
wl_display_connect_to_fd(int fd)
|
2011-04-11 09:14:43 -04:00
|
|
|
{
|
|
|
|
|
struct wl_display *display;
|
|
|
|
|
const char *debug;
|
|
|
|
|
|
|
|
|
|
debug = getenv("WAYLAND_DEBUG");
|
2012-11-21 17:14:55 -05:00
|
|
|
if (debug && (strstr(debug, "client") || strstr(debug, "1")))
|
2013-12-18 20:56:18 -06:00
|
|
|
debug_client = 1;
|
2011-04-11 09:14:43 -04:00
|
|
|
|
|
|
|
|
display = malloc(sizeof *display);
|
2012-10-15 17:50:36 -04:00
|
|
|
if (display == NULL) {
|
|
|
|
|
close(fd);
|
2011-04-11 09:14:43 -04:00
|
|
|
return NULL;
|
2012-10-15 17:50:36 -04:00
|
|
|
}
|
2011-04-11 09:14:43 -04:00
|
|
|
|
|
|
|
|
memset(display, 0, sizeof *display);
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2012-08-14 13:16:10 -04:00
|
|
|
display->fd = fd;
|
2013-06-01 17:40:52 -05:00
|
|
|
wl_map_init(&display->objects, WL_MAP_CLIENT_SIDE);
|
2014-02-07 16:00:21 -08:00
|
|
|
wl_event_queue_init(&display->default_queue, display);
|
2014-02-07 16:50:50 -08:00
|
|
|
wl_event_queue_init(&display->display_queue, display);
|
2012-10-11 17:11:54 -04:00
|
|
|
pthread_mutex_init(&display->mutex, NULL);
|
client: Add wl_display_prepare_read() API to relax thread model assumptions
The current thread model assumes that the application or toolkit will have
one thread that either polls the display fd and dispatches events or just
dispatches in a loop. Only this main thread will read from the fd while
all other threads will block on a pthread condition and expect the main
thread to deliver events to them.
This turns out to be too restrictive. We can't assume that there
always will be a thread like that. Qt QML threaded rendering will
block the main thread on a condition that's signaled by a rendering
thread after it finishes rendering. This leads to a deadlock when the
rendering thread blocks in eglSwapBuffers(), and the main thread is
waiting on the condition. Another problematic use case is with games
that have a rendering thread for a splash screen while the main thread
is busy loading game data or compiling shaders. The main thread isn't
responsive and ends up blocking eglSwapBuffers() in the rendering thread.
We also can't assume that there will be only one thread polling on the
file descriptor. A valid use case is a thread receiving data from a
custom wayland interface as well as a device fd or network socket.
The thread may want to wait on either events from the wayland
interface or data from the fd, in which case it needs to poll on both
the wayland display fd and the device/network fd.
The solution seems pretty straightforward: just let all threads read
from the fd. However, the main-thread restriction was introduced to
avoid a race. Simplified, main loops will do something like this:
wl_display_dispatch_pending(display);
/* Race here if other thread reads from fd and places events
* in main event queue. We go to sleep in poll while sitting on
* events that may stall the application if not dispatched. */
poll(fds, nfds, -1);
/* Race here if other thread reads and doesn't queue any
* events for main queue. wl_display_dispatch() below will block
* trying to read from the fd, while other fds in the mainloop
* are ignored. */
wl_display_dispatch(display);
The restriction that only the main thread can read from the fd avoids
these races, but has the problems described above.
This patch introduces new API to solve both problems. We add
int wl_display_prepare_read(struct wl_display *display);
and
int wl_display_read_events(struct wl_display *display);
wl_display_prepare_read() registers the calling thread as a potential
reader of events. Once data is available on the fd, all reader
threads must call wl_display_read_events(), at which point one of the
threads will read from the fd and distribute the events to event
queues. When that is done, all threads return from
wl_display_read_events().
From the point of view of a single thread, this ensures that between
calling wl_display_prepare_read() and wl_display_read_events(), no
other thread will read from the fd and queue events in its event
queue. This avoids the race conditions described above, and we avoid
relying on any one thread to be available to read events.
2013-03-17 14:21:48 -04:00
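The intended main-loop usage, sketched below; fds[0] is assumed to
poll the display fd, and wl_display_cancel_read() (added alongside
the two functions above) releases the reader slot when a thread
decides not to read after all:

static int
dispatch_display_once(struct wl_display *display,
		      struct pollfd *fds, nfds_t nfds)
{
	/* Register as a reader; anything already queued for us must
	 * be dispatched first, or prepare_read() keeps failing. */
	while (wl_display_prepare_read(display) != 0)
		if (wl_display_dispatch_pending(display) < 0)
			return -1;

	wl_display_flush(display);

	if (poll(fds, nfds, -1) > 0 && (fds[0].revents & POLLIN))
		wl_display_read_events(display);
	else
		wl_display_cancel_read(display);

	return wl_display_dispatch_pending(display);
}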
|
|
|
pthread_cond_init(&display->reader_cond, NULL);
|
|
|
|
|
display->reader_count = 0;
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2013-06-01 17:40:53 -05:00
|
|
|
wl_map_insert_new(&display->objects, 0, NULL);
|
2011-08-19 22:50:53 -04:00
|
|
|
|
2010-12-01 17:07:41 -05:00
|
|
|
display->proxy.object.interface = &wl_display_interface;
|
2011-11-18 21:59:36 -05:00
|
|
|
display->proxy.object.id =
|
2013-06-01 17:40:53 -05:00
|
|
|
wl_map_insert_new(&display->objects, 0, display);
|
2008-12-21 21:50:23 -05:00
|
|
|
display->proxy.display = display;
|
2011-08-19 22:50:53 -04:00
|
|
|
display->proxy.object.implementation = (void(**)(void)) &display_listener;
|
2011-02-18 15:28:54 -05:00
|
|
|
display->proxy.user_data = display;
|
2014-02-07 16:00:21 -08:00
|
|
|
display->proxy.queue = &display->default_queue;
|
2012-11-03 22:26:10 +01:00
|
|
|
display->proxy.flags = 0;
|
|
|
|
|
display->proxy.refcount = 1;
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2012-10-04 16:54:22 -04:00
|
|
|
display->connection = wl_connection_create(display->fd);
|
2013-03-17 14:21:48 -04:00
	if (display->connection == NULL)
		goto err_connection;

	return display;

err_connection:
	pthread_mutex_destroy(&display->mutex);
	pthread_cond_destroy(&display->reader_cond);
	wl_map_release(&display->objects);
	close(display->fd);
	free(display);

	return NULL;
}

/** Connect to a Wayland display
 *
 * \param name Name of the Wayland display to connect to
 * \return A \ref wl_display object or \c NULL on failure
 *
 * Connect to the Wayland display named \c name. If \c name is \c NULL,
 * its value will be replaced with the WAYLAND_DISPLAY environment
 * variable if it is set, otherwise display "wayland-0" will be used.
 *
 * \memberof wl_display
 */
WL_EXPORT struct wl_display *
wl_display_connect(const char *name)
{
	char *connection, *end;
	int flags, fd;

	connection = getenv("WAYLAND_SOCKET");
	if (connection) {
		int prev_errno = errno;
		errno = 0;
		fd = strtol(connection, &end, 0);
		if (errno != 0 || connection == end || *end != '\0')
			return NULL;
		errno = prev_errno;

		flags = fcntl(fd, F_GETFD);
		if (flags != -1)
			fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
		unsetenv("WAYLAND_SOCKET");
	} else {
		fd = connect_to_socket(name);
		if (fd < 0)
			return NULL;
	}

	return wl_display_connect_to_fd(fd);
}
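
For illustration, a minimal client sketch using only the entry points in
this file (wl_display_connect(), wl_display_roundtrip(),
wl_display_get_fd(), wl_display_disconnect()); error handling is pared
down to the essentials:

#include <stdio.h>
#include <wayland-client.h>

int
main(void)
{
	/* NULL selects $WAYLAND_DISPLAY, falling back to "wayland-0";
	 * a socket inherited via $WAYLAND_SOCKET takes precedence. */
	struct wl_display *display = wl_display_connect(NULL);

	if (display == NULL) {
		fprintf(stderr, "failed to connect to Wayland display\n");
		return 1;
	}

	/* One roundtrip confirms the server is answering requests. */
	if (wl_display_roundtrip(display) < 0) {
		wl_display_disconnect(display);
		return 1;
	}

	fprintf(stderr, "connected on fd %d\n", wl_display_get_fd(display));
	wl_display_disconnect(display);
	return 0;
}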

/** Close a connection to a Wayland display
 *
 * \param display The display context object
 *
 * Close the connection to \c display and free all resources associated
 * with it.
 *
 * \memberof wl_display
 */
WL_EXPORT void
wl_display_disconnect(struct wl_display *display)
{
	wl_connection_destroy(display->connection);
	wl_map_release(&display->objects);
	wl_event_queue_release(&display->default_queue);
	wl_event_queue_release(&display->display_queue);
	pthread_mutex_destroy(&display->mutex);
	pthread_cond_destroy(&display->reader_cond);
	close(display->fd);

	free(display);
}

/** Get a display context's file descriptor
 *
 * \param display The display context object
 * \return Display object file descriptor
 *
 * Return the file descriptor associated with a display so it can be
 * integrated into the client's main loop.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_get_fd(struct wl_display *display)
{
	return display->fd;
}
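
Putting the descriptor to use: a sketch of one iteration of a
poll()-based main loop, following the prepare_read/read_events protocol
described in the commit message earlier in this file (single fd,
simplified error handling):

#include <poll.h>
#include <wayland-client.h>

static int
dispatch_wayland_events(struct wl_display *display)
{
	struct pollfd pfd = { wl_display_get_fd(display), POLLIN, 0 };

	/* Dispatch anything already queued before going to sleep. */
	while (wl_display_prepare_read(display) != 0)
		if (wl_display_dispatch_pending(display) < 0)
			return -1;

	/* Flush outgoing requests so the server can answer them. */
	wl_display_flush(display);

	if (poll(&pfd, 1, -1) < 0) {
		wl_display_cancel_read(display);
		return -1;
	}

	/* Read from the fd (or wait for the reading thread) and dispatch. */
	if (wl_display_read_events(display) < 0)
		return -1;

	return wl_display_dispatch_pending(display);
}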

Switch protocol to using serial numbers for ordering events and requests

The wayland protocol, like X, uses timestamps to match up certain
requests with input events. The problem is that sometimes we need to
send out an event that doesn't have a corresponding timestamped input
event. For example, the pointer focus surface goes away and a new
surface needs to receive a pointer enter event. These events are
normally timestamped with the evdev event timestamp, but in this case
we don't have an evdev timestamp, so we would have to fall back to
gettimeofday() (or clock_gettime()) without knowing whether the result
comes from the same time source.

However, for all these cases we don't need a real-time timestamp; we
just need a serial number that encodes the order of events inside the
server. So we introduce a serial number mechanism that we can use to
order events. We still need real-time timestamps for actual input
device events (motion, buttons, keys, touch) to be able to reason about
double-click speed and movement speed, so events that correspond to
user input carry both a serial number and a timestamp.

The serial number also gives us a mechanism to tie together events that
are "logically the same", such as a unicode event and a keycode event,
or a motion event and a relative event from a raw device.

static void
sync_callback(void *data, struct wl_callback *callback, uint32_t serial)
{
	int *done = data;

	*done = 1;
	wl_callback_destroy(callback);
}

static const struct wl_callback_listener sync_listener = {
	sync_callback
};

/** Block until all pending requests are processed by the server
 *
 * \param display The display context object
 * \param queue The queue on which to run the roundtrip
 * \return The number of dispatched events on success or -1 on failure
 *
 * This function blocks until the server has processed all currently issued
 * requests by sending a request to the display server and waiting for a
 * reply before returning.
 *
 * \note This function may dispatch other events being received on the given
 * queue.
 *
 * \note This function uses wl_display_dispatch_queue() internally. If you
 * are using wl_display_read_events() from more than one thread, don't use
 * this function (or make sure that calling wl_display_roundtrip_queue()
 * doesn't interfere with calling wl_display_prepare_read() and
 * wl_display_read_events()).
 *
 * \sa wl_display_roundtrip()
 * \memberof wl_event_queue
 */
WL_EXPORT int
wl_display_roundtrip_queue(struct wl_display *display, struct wl_event_queue *queue)
{
	struct wl_callback *callback;
	int done, ret = 0;

	done = 0;
	callback = wl_display_sync(display);
	if (callback == NULL)
		return -1;
	wl_proxy_set_queue((struct wl_proxy *) callback, queue);
	wl_callback_add_listener(callback, &sync_listener, &done);
	while (!done && ret >= 0)
		ret = wl_display_dispatch_queue(display, queue);

	if (ret == -1 && !done)
		wl_callback_destroy(callback);

	return ret;
}
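
A usage sketch under stated assumptions: fetching the initial registry
globals on a private queue so that listeners on the default queue are
never invoked; the wl_registry_listener passed in is assumed to be
defined by the caller:

#include <wayland-client.h>

static int
fetch_globals(struct wl_display *display,
	      const struct wl_registry_listener *listener, void *data)
{
	struct wl_event_queue *queue = wl_display_create_queue(display);
	struct wl_registry *registry;
	int ret;

	if (queue == NULL)
		return -1;

	registry = wl_display_get_registry(display);
	if (registry == NULL) {
		wl_event_queue_destroy(queue);
		return -1;
	}

	/* Route the registry's events (and those of objects created
	 * through it) to our private queue. */
	wl_proxy_set_queue((struct wl_proxy *) registry, queue);
	wl_registry_add_listener(registry, listener, data);

	/* Blocks until the server has processed the requests above,
	 * dispatching only our queue. */
	ret = wl_display_roundtrip_queue(display, queue);

	wl_registry_destroy(registry);
	wl_event_queue_destroy(queue);

	return ret;
}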

/** Block until all pending requests are processed by the server
 *
 * \param display The display context object
 * \return The number of dispatched events on success or -1 on failure
 *
 * This function blocks until the server has processed all currently issued
 * requests by sending a request to the display server and waiting for a
 * reply before returning.
 *
 * \note This function may dispatch other events being received on the
 * default queue.
 *
 * \note This function uses wl_display_dispatch_queue() internally. If you
 * are using wl_display_read_events() from more than one thread, don't use
 * this function (or make sure that calling wl_display_roundtrip()
 * doesn't interfere with calling wl_display_prepare_read() and
 * wl_display_read_events()).
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_roundtrip(struct wl_display *display)
{
	return wl_display_roundtrip_queue(display, &display->default_queue);
}

client: Add wl_event_queue for multi-thread dispatching

This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffers() independently
of a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several
threads. Finally, the current callback model gets into trouble even in a
single-threaded scenario: if we have to block in eglSwapBuffers(), we may
end up doing unrelated callbacks from within EGL.

The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from
objects associated with the queue will be put in the queue, and
conversely, events from objects associated with the queue will not be
queued up anywhere else. The wl_display struct has a built-in event
queue, which is considered the main and default event queue. New proxies
are associated with the same queue as the object that created them
(either the object that a request with a new-id argument was sent to, or
the object that sent an event with a new-id argument). A proxy can be
moved to a different event queue by calling wl_proxy_set_queue().

A subsystem, such as EGL, will then create its own event queue and
associate the objects it expects to receive events from with that queue.
If EGL needs to block and wait for a certain event, it can keep
dispatching events from its queue until that event comes in. This won't
call out to unrelated code with an EGL lock held. Similarly, we don't
risk the main thread handling an event from an EGL object and then
calling into EGL from a different thread without the lock held.
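
A hedged sketch of the dispatch pattern described above; display, queue
and the done flag (set by one of the subsystem's listeners on that
queue) are assumed from the surrounding context:

	/* Keep dispatching only our private queue until our event arrives;
	 * no handlers registered on other queues are invoked from here. */
	while (!done) {
		if (wl_display_dispatch_queue(display, queue) == -1)
			break;
	}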

static int
create_proxies(struct wl_proxy *sender, struct wl_closure *closure)
{
	struct wl_proxy *proxy;
	const char *signature;
	struct argument_details arg;
	uint32_t id;
	int i;
	int count;

	signature = closure->message->signature;
	count = arg_count_for_signature(signature);
	for (i = 0; i < count; i++) {
		signature = get_next_argument(signature, &arg);
		switch (arg.type) {
		case 'n':
			id = closure->args[i].n;
			if (id == 0) {
				closure->args[i].o = NULL;
				break;
			}
			proxy = wl_proxy_create_for_id(sender, id,
						       closure->message->types[i]);
			if (proxy == NULL)
				return -1;
			closure->args[i].o = (struct wl_object *)proxy;
			break;
		default:
			break;
		}
	}

	return 0;
}
static void
increase_closure_args_refcount(struct wl_closure *closure)
{
	const char *signature;
	struct argument_details arg;
	int i, count;
	struct wl_proxy *proxy;

	signature = closure->message->signature;
	count = arg_count_for_signature(signature);
	for (i = 0; i < count; i++) {
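		/* By this point create_proxies() and
		 * wl_closure_lookup_objects() have turned both new-id ('n')
		 * and object ('o') arguments into proxy pointers (possibly
		 * NULL), so both kinds take a reference below. */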
		signature = get_next_argument(signature, &arg);
		switch (arg.type) {
		case 'n':
		case 'o':
			proxy = (struct wl_proxy *) closure->args[i].o;
			if (proxy)
				proxy->refcount++;
			break;
		default:
			break;
		}
	}
}

static int
queue_event(struct wl_display *display, int len)
{
	uint32_t p[2], id;
	int opcode, size;
	struct wl_proxy *proxy;
	struct wl_closure *closure;
	const struct wl_message *message;
	struct wl_event_queue *queue;

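	/* Each message starts with a two-word header: the target object id,
	 * followed by the total message size in the upper 16 bits and the
	 * event opcode in the lower 16 bits of the second word. */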
	wl_connection_copy(display->connection, p, sizeof p);
	id = p[0];
	opcode = p[1] & 0xffff;
	size = p[1] >> 16;
	if (len < size)
		return 0;

	proxy = wl_map_lookup(&display->objects, id);
	if (proxy == WL_ZOMBIE_OBJECT) {
		wl_connection_consume(display->connection, size);
		return size;
	} else if (proxy == NULL) {
		wl_connection_consume(display->connection, size);
		return size;
	}

	message = &proxy->object.interface->events[opcode];
	closure = wl_connection_demarshal(display->connection, size,
					  &display->objects, message);
	if (!closure)
		return -1;

	if (create_proxies(proxy, closure) < 0) {
		wl_closure_destroy(closure);
		return -1;
	}

	if (wl_closure_lookup_objects(closure, &display->objects) != 0) {
		wl_closure_destroy(closure);
		return -1;
	}

	increase_closure_args_refcount(closure);
	proxy->refcount++;
	closure->proxy = proxy;

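	/* Events addressed to the wl_display object itself (such as error
	 * and delete_id) are kept on a private internal queue so that they
	 * are always processed, regardless of which queue the client
	 * dispatches. */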
	if (proxy == &display->proxy)
		queue = &display->display_queue;
	else
		queue = proxy->queue;

	wl_list_insert(queue->event_list.prev, &closure->link);

	return size;
}

static void
dispatch_event(struct wl_display *display, struct wl_event_queue *queue)
{
	struct wl_closure *closure;
	struct wl_proxy *proxy;
	int opcode;
	bool proxy_destroyed;

	closure = container_of(queue->event_list.next,
			       struct wl_closure, link);
	wl_list_remove(&closure->link);
	opcode = closure->opcode;

	/* Verify that the receiving object is still valid by checking
	 * whether it has been destroyed by the application. */

	decrease_closure_args_refcount(closure);
	proxy = closure->proxy;
	proxy_destroyed = !!(proxy->flags & WL_PROXY_FLAG_DESTROYED);

	proxy->refcount--;
	if (proxy_destroyed) {
		if (!proxy->refcount)
			free(proxy);

		wl_closure_destroy(closure);
		return;
	}

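	/* Drop the display lock while the handler runs: the client code
	 * invoked below may issue new requests or destroy proxies, both of
	 * which take the lock again. */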
	pthread_mutex_unlock(&display->mutex);

	if (proxy->dispatcher) {
		if (debug_client)
			wl_closure_print(closure, &proxy->object, false);

		wl_closure_dispatch(closure, proxy->dispatcher,
				    &proxy->object, opcode);
	} else if (proxy->object.implementation) {
		if (debug_client)
			wl_closure_print(closure, &proxy->object, false);

		wl_closure_invoke(closure, WL_CLOSURE_INVOKE_CLIENT,
				  &proxy->object, opcode, proxy->user_data);
	}

	wl_closure_destroy(closure);
	pthread_mutex_lock(&display->mutex);
}

static int
read_events(struct wl_display *display)
{
	int total, rem, size;
	uint32_t serial;

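	/* The last reader to arrive performs the actual read; the other
	 * readers block in wl_display_read_events() until the data has been
	 * demarshaled and queued, and are then woken up. */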
	display->reader_count--;
	if (display->reader_count == 0) {
		total = wl_connection_read(display->connection);
		if (total == -1) {
			if (errno == EAGAIN) {
				/* we must wake up threads whenever
				 * the reader_count dropped to 0 */
				display_wakeup_threads(display);

				return 0;
			}

			display_fatal_error(display, errno);
			return -1;
		} else if (total == 0) {
			/* The compositor has closed the socket. This
			 * should be considered an error so we'll fake
			 * an errno */
			errno = EPIPE;
			display_fatal_error(display, errno);
			return -1;
		}
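
		/* A wire message begins with an 8-byte header (a 32-bit
		 * object id plus a 32-bit word packing size and opcode),
		 * so fewer than 8 remaining bytes cannot contain a
		 * complete message. */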
		for (rem = total; rem >= 8; rem -= size) {
			size = queue_event(display, rem);
			if (size == -1) {
				display_fatal_error(display, errno);
				return -1;
			} else if (size == 0) {
				break;
			}
		}

		display_wakeup_threads(display);
	} else {
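		/* Another thread is (or will be) doing the actual read;
		 * wait until it bumps read_serial via
		 * display_wakeup_threads() before checking for errors. */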
		serial = display->read_serial;
		while (display->read_serial == serial)
			pthread_cond_wait(&display->reader_cond,
					  &display->mutex);

		if (display->last_error) {
			errno = display->last_error;
			return -1;
		}
	}

	return 0;
}
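
/* Must be called with display->mutex held; drops the calling thread's
 * read intention and, if it was the last registered reader, wakes the
 * threads blocked in read_events(). */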
static void
cancel_read(struct wl_display *display)
{
	display->reader_count--;
	if (display->reader_count == 0)
		display_wakeup_threads(display);
}

/** Read events from display file descriptor
 *
 * \param display The display context object
 * \return 0 on success or -1 on error. In case of error errno will
 * be set accordingly
 *
 * This will read events from the file descriptor for the display.
 * This function does not dispatch events; it only reads and queues
 * events into their corresponding event queues. If no data is
 * available on the file descriptor, wl_display_read_events() returns
 * immediately. To dispatch events that may have been queued, call
 * wl_display_dispatch_pending() or wl_display_dispatch_queue_pending().
 *
 * Before calling this function, wl_display_prepare_read() must be
 * called first. When running in multiple threads (which is the usual
 * case, since we'd use wl_display_dispatch() otherwise), every thread
 * must call wl_display_prepare_read() before calling this function.
 *
 * After calling wl_display_prepare_read() there can be some extra code
 * before calling wl_display_read_events(), for example poll() or the
 * like. Example of code in a thread:
 *
 * \code
 * while (wl_display_prepare_read(display) < 0)
 *         wl_display_dispatch_pending(display);
 * wl_display_flush(display);
 *
 * ... some code ...
 *
 * fds[0].fd = wl_display_get_fd(display);
 * fds[0].events = POLLIN;
 * poll(fds, 1, -1);
 *
 * if (!everything_ok()) {
 *         wl_display_cancel_read(display);
 *         handle_error();
 * }
 *
 * if (wl_display_read_events(display) < 0)
 *         handle_error();
 *
 * ...
 * \endcode
 *
 * After wl_display_prepare_read() succeeds, other threads that enter
 * wl_display_read_events() will sleep until the very last thread enters
 * it too or cancels. Therefore, when the display fd becomes (or already
 * is) readable, wl_display_read_events() should be called as soon as
 * possible to unblock all threads. If wl_display_read_events() will not
 * be called, wl_display_cancel_read() must be called instead to let the
 * other threads continue.
 *
 * This function must not be called simultaneously with
 * wl_display_dispatch(); that may lead to deadlock. If a programmer
 * wants, for some reason, to use wl_display_dispatch() in one thread and
 * wl_display_prepare_read() with wl_display_read_events() in another,
 * extra care must be taken to serialize these calls, i.e. to use mutexes
 * or similar around the whole prepare + read sequence.
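 *
 * For example (a minimal sketch; app_mutex is a hypothetical
 * application-owned lock, not part of this API, which the thread calling
 * wl_display_dispatch() would also hold):
 *
 * \code
 * pthread_mutex_lock(&app_mutex);
 * while (wl_display_prepare_read(display) < 0)
 *         wl_display_dispatch_pending(display);
 * if (wl_display_read_events(display) < 0)
 *         handle_error();
 * pthread_mutex_unlock(&app_mutex);
 * \endcode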
 *
 * \sa wl_display_prepare_read(), wl_display_cancel_read(),
 * wl_display_dispatch_pending(), wl_display_dispatch()
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_read_events(struct wl_display *display)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

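	/* Even on a fatal error this thread must drop its read intention,
	 * otherwise other readers would block forever in read_events(). */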
	if (display->last_error) {
		cancel_read(display);
		pthread_mutex_unlock(&display->mutex);

		errno = display->last_error;
		return -1;
	}

	ret = read_events(display);

	pthread_mutex_unlock(&display->mutex);

	return ret;
}

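/* Dispatch the display's own internal queue (events on the wl_display
 * object itself, such as error and delete_id) before the caller's queue,
 * so this protocol bookkeeping happens no matter which queue a thread
 * dispatches. */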
static int
dispatch_queue(struct wl_display *display, struct wl_event_queue *queue)
{
	int count;

	if (display->last_error)
		goto err;

	count = 0;
	while (!wl_list_empty(&display->display_queue.event_list)) {
		dispatch_event(display, &display->display_queue);
		if (display->last_error)
			goto err;
		count++;
	}

	while (!wl_list_empty(&queue->event_list)) {
		dispatch_event(display, queue);
		if (display->last_error)
			goto err;
		count++;
	}

	return count;

err:
	errno = display->last_error;
	return -1;
}

/** Prepare to read events from the display's file descriptor to a queue
 *
 * \param display The display context object
 * \param queue The event queue to use
 * \return 0 on success or -1 if event queue was not empty
 *
 * This function (or wl_display_prepare_read()) must be called before reading
 * from the file descriptor using wl_display_read_events(). Calling
 * wl_display_prepare_read_queue() announces the calling thread's intention to
 * read and ensures that until the thread is ready to read and calls
 * wl_display_read_events(), no other thread will read from the file
 * descriptor. This only succeeds if the event queue is empty; if it is not,
 * -1 is returned and errno is set to EAGAIN.
 *
 * If a thread successfully calls wl_display_prepare_read_queue(), it must
 * either call wl_display_read_events() when it's ready or cancel the read
 * intention by calling wl_display_cancel_read().
 *
 * Use this function before polling on the display fd, or to integrate the
 * fd into a toolkit event loop in a race-free way. A correct usage would
 * be (with most error checking left out):
 *
 * \code
 * while (wl_display_prepare_read_queue(display, queue) != 0)
 *         wl_display_dispatch_queue_pending(display, queue);
 * wl_display_flush(display);
 *
 * ret = poll(fds, nfds, -1);
 * if (has_error(ret))
 *         wl_display_cancel_read(display);
 * else
 *         wl_display_read_events(display);
 *
 * wl_display_dispatch_queue_pending(display, queue);
 * \endcode
 *
 * Here we call wl_display_prepare_read_queue(), which ensures that between
 * returning from that call and eventually calling wl_display_read_events(),
 * no other thread will read from the fd and queue events in our queue. If
 * the call to wl_display_prepare_read_queue() fails, we dispatch the
 * pending events and try again until we're successful.
 *
 * When using wl_display_dispatch() we'd have something like:
 *
 * \code
 * wl_display_dispatch_pending(display);
 * wl_display_flush(display);
 * poll(fds, nfds, -1);
 * wl_display_dispatch(display);
 * \endcode
 *
 * This sequence is not thread-safe. The race is immediately after poll(),
 * where one thread could preempt and read events before the other thread calls
 * wl_display_dispatch(). That call then blocks and starves the other
 * fds in the event loop.
 *
 * Another race can occur when more than one event queue is in use.
 * When one thread calls wl_display_dispatch(_queue)(), it reads all
 * events from the display's fd and queues them on the appropriate
 * queues. Then it dispatches only its own queue, and the other events
 * sit in their queues, waiting to be dispatched. If that happens
 * before another thread managed to call poll(), that thread will
 * block with events already queued.
 *
 * The wl_display_prepare_read_queue() function doesn't acquire exclusive access
 * to the display's fd. It only registers that the thread calling this function
 * has the intention to read from the fd. When all registered readers call
 * wl_display_read_events(), only one of them (at random) eventually reads and
 * queues the events while the others sleep. This way we avoid races and
 * can still read from more than one thread.
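 *
 * For instance, a thread that needs to wait on both the display fd and
 * another fd can combine the two in one poll() (a minimal sketch;
 * device_fd and handle_device_data() are hypothetical stand-ins for the
 * application's own descriptor and handler):
 *
 * \code
 * struct pollfd fds[2];
 * int ret;
 *
 * while (wl_display_prepare_read(display) != 0)
 *         wl_display_dispatch_pending(display);
 * wl_display_flush(display);
 *
 * fds[0].fd = wl_display_get_fd(display);
 * fds[0].events = POLLIN;
 * fds[1].fd = device_fd;
 * fds[1].events = POLLIN;
 *
 * ret = poll(fds, 2, -1);
 * if (ret > 0 && (fds[0].revents & POLLIN))
 *         wl_display_read_events(display);
 * else
 *         wl_display_cancel_read(display);
 * wl_display_dispatch_pending(display);
 *
 * if (ret > 0 && (fds[1].revents & POLLIN))
 *         handle_device_data(device_fd);
 * \endcode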
 *
 * \sa wl_display_cancel_read(), wl_display_read_events(),
 * wl_display_prepare_read()
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_prepare_read_queue(struct wl_display *display,
			      struct wl_event_queue *queue)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	if (!wl_list_empty(&queue->event_list)) {
		/* The caller must dispatch the events already queued
		 * before it may block in poll(), so refuse with EAGAIN. */
		errno = EAGAIN;
		ret = -1;
	} else {
		/* Register the calling thread as a reader. */
		display->reader_count++;
		ret = 0;
	}

	pthread_mutex_unlock(&display->mutex);

	return ret;
}

/** Prepare to read events from the display's file descriptor
 *
 * \param display The display context object
 * \return 0 on success or -1 if the event queue was not empty
 *
 * This function does the same thing as wl_display_prepare_read_queue()
 * with the default queue passed as the queue.
 *
 * \sa wl_display_prepare_read_queue
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_prepare_read(struct wl_display *display)
{
	return wl_display_prepare_read_queue(display, &display->default_queue);
}

/** Cancel read intention on display's fd
 *
 * \param display The display context object
 *
 * After a thread has successfully called wl_display_prepare_read() it must
 * either call wl_display_read_events() or wl_display_cancel_read().
 * If threads do not follow this rule, it will lead to deadlock.
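 *
 * A minimal sketch of the intended pairing (assuming pfd has already been
 * set up to poll the display fd for POLLIN):
 *
 * \code
 * while (wl_display_prepare_read(display) != 0)
 *         wl_display_dispatch_pending(display);
 *
 * if (poll(&pfd, 1, -1) > 0)
 *         wl_display_read_events(display);
 * else
 *         wl_display_cancel_read(display);
 * \endcode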
 *
 * \sa wl_display_prepare_read(), wl_display_read_events()
 *
 * \memberof wl_display
 */
WL_EXPORT void
wl_display_cancel_read(struct wl_display *display)
{
	pthread_mutex_lock(&display->mutex);

	cancel_read(display);

	pthread_mutex_unlock(&display->mutex);
}

/** Dispatch events in an event queue
 *
 * \param display The display context object
 * \param queue The event queue to dispatch
 * \return The number of dispatched events on success or -1 on failure
 *
 * Dispatch all incoming events for objects assigned to the given
 * event queue. On failure -1 is returned and errno set appropriately.
 *
 * The behaviour of this function is exactly the same as the behaviour of
 * wl_display_dispatch(), but it dispatches events on the given queue,
 * not on the default queue.
 *
 * This function blocks if there are no events to dispatch (if there are,
 * it only dispatches those events and returns immediately).
 * When this function returns after blocking, it means that it read events
 * from the display's fd and queued them on the appropriate queues.
 * If any of the incoming events were assigned to the given queue,
 * they have been dispatched by that point.
 *
 * \note Since Wayland 1.5 the display has an extra queue
 * for its own events (i.e. delete_id). That queue is always dispatched,
 * no matter what queue was passed as an argument to this function.
 * That means that this function can return a non-zero value even when it
 * hasn't dispatched any event for the given queue.
 *
 * This function has the same constraints for use in multi-threaded apps
 * as \ref wl_display_dispatch().
 *
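 * A typical use is dispatching a private queue from a dedicated thread.
 * A minimal sketch of that pattern (the running flag and the surface
 * proxy are assumptions for illustration, not part of this API):
 *
 * \code
 * struct wl_event_queue *queue;
 *
 * queue = wl_display_create_event_queue(display);
 * wl_proxy_set_queue((struct wl_proxy *) surface, queue);
 *
 * while (running)
 *         if (wl_display_dispatch_queue(display, queue) == -1)
 *                 break;
 *
 * wl_event_queue_destroy(queue);
 * \endcode
 *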
 * \sa wl_display_dispatch(), wl_display_dispatch_pending(),
 * wl_display_dispatch_queue_pending()
 *
 * \memberof wl_event_queue
 */
WL_EXPORT int
wl_display_dispatch_queue(struct wl_display *display,
			  struct wl_event_queue *queue)
{
	struct pollfd pfd[2];
	int ret;

	pthread_mutex_lock(&display->mutex);

	/* First dispatch whatever is already sitting in the queue. */
	ret = dispatch_queue(display, queue);
	if (ret == -1)
		goto err_unlock;
	if (ret > 0) {
		pthread_mutex_unlock(&display->mutex);
		return ret;
	}

	/* We ignore EPIPE here, so that we try to read events before
	 * returning an error. When the compositor sends an error it
	 * will close the socket, and if we bail out here we don't get
	 * a chance to process the error. */
	ret = wl_connection_flush(display->connection);
	if (ret < 0 && errno != EAGAIN && errno != EPIPE) {
		display_fatal_error(display, errno);
		goto err_unlock;
	}

	/* The equivalent of wl_display_prepare_read(): register this
	 * thread as a reader while the mutex is still held. */
	display->reader_count++;

	pthread_mutex_unlock(&display->mutex);

	pfd[0].fd = display->fd;
	pfd[0].events = POLLIN;
	do {
		ret = poll(pfd, 1, -1);
	} while (ret == -1 && errno == EINTR);

	if (ret == -1) {
		/* Withdraw the read intention so other readers
		 * don't wait for us. */
		wl_display_cancel_read(display);
		return -1;
	}

	pthread_mutex_lock(&display->mutex);

	/* read_events() reads from the fd, or waits for another reader
	 * to do so, and queues the incoming events. */
	if (read_events(display) == -1)
		goto err_unlock;

	ret = dispatch_queue(display, queue);
	if (ret == -1)
		goto err_unlock;

	pthread_mutex_unlock(&display->mutex);

	return ret;

err_unlock:
	pthread_mutex_unlock(&display->mutex);
	return -1;
}

/** Dispatch pending events in an event queue
 *
 * \param display The display context object
 * \param queue The event queue to dispatch
 * \return The number of dispatched events on success or -1 on failure
 *
 * Dispatch all incoming events for objects assigned to the given
 * event queue. On failure -1 is returned and errno set appropriately.
 * If there are no events queued, this function returns immediately.
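 *
 * That makes it suitable, for example, for draining a queue after a
 * successful wl_display_read_events() (a sketch, with error handling
 * left out):
 *
 * \code
 * if (wl_display_read_events(display) == 0)
 *         wl_display_dispatch_queue_pending(display, queue);
 * \endcode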
 *
 * \memberof wl_event_queue
 * \since 1.0.2
 */
WL_EXPORT int
wl_display_dispatch_queue_pending(struct wl_display *display,
				  struct wl_event_queue *queue)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	ret = dispatch_queue(display, queue);
	pthread_mutex_unlock(&display->mutex);

	return ret;
}

/** Process incoming events
 *
 * \param display The display context object
 * \return The number of dispatched events on success or -1 on failure
 *
 * Dispatch the display's default event queue.
 *
 * If the default event queue is empty, this function blocks until there are
 * events to be read from the display fd. Events are read and queued on
 * the appropriate event queues. Finally, events on the default event queue
 * are dispatched.
 *
 * In a multi-threaded environment, the programmer may want to use
 * wl_display_read_events(). However, use of wl_display_read_events()
 * must not be mixed with wl_display_dispatch(). See wl_display_read_events()
 * and wl_display_prepare_read() for more details.
 *
 * \note It is not possible to check if there are events on the queue
 * or not. For dispatching default queue events without blocking, see \ref
 * wl_display_dispatch_pending().
 *
 * \sa wl_display_dispatch_pending(), wl_display_dispatch_queue(),
 * wl_display_read_events()
 *
 * \memberof wl_display
 */

client: Add wl_event_queue for multi-thread dispatching
This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffers() independently
of a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several threads.
Finally, the current callback model gets into trouble even in a single
threaded scenario: if we have to block in eglSwapBuffers(), we may end up
doing unrelated callbacks from within EGL.
The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from objects
associated with the queue will be put in the queue, and conversely,
events from objects associated with the queue will not be queued up anywhere
else. The wl_display struct has a built-in event queue, which is considered
the main and default event queue. New proxies are associated with the
same queue as the object that created them (either the object that a
request with a new-id argument was sent to or the object that sent an
event with a new-id argument). A proxy can be moved to a different event
queue by calling wl_proxy_set_queue().
A subsystem, such as EGL, will then create its own event queue and associate
the objects it expects to receive events from with that queue. If EGL
needs to block and wait for a certain event, it can keep dispatching events
from its queue until that event comes in. This won't call out to unrelated
code with an EGL lock held. Similarly, we don't risk the main thread
handling an event from an EGL object and then calling into EGL from a
different thread without the lock held.

WL_EXPORT int
wl_display_dispatch(struct wl_display *display)
{
	return wl_display_dispatch_queue(display, &display->default_queue);
}
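
/* Usage sketch (illustrative, not part of the original source): a
 * minimal single-threaded client can block here for each batch of
 * events until a fatal error occurs:
 *
 *	while (wl_display_dispatch(display) != -1)
 *		;
 *
 * A return of -1 indicates a fatal error; see wl_display_get_error().
 */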

/** Dispatch default queue events without reading from the display fd
 *
 * \param display The display context object
 * \return The number of dispatched events or -1 on failure
 *
 * This function dispatches events on the main event queue. It does not
 * attempt to read the display fd and simply returns zero if the main
 * queue is empty, i.e., it doesn't block.
 *
 * This is necessary when a client's main loop wakes up on some fd other
 * than the display fd (network socket, timer fd, etc) and calls \ref
 * wl_display_dispatch_queue() from that callback. This may queue up
 * events in other queues while reading all data from the display fd.
 * When the main loop returns from the handler, the display fd
 * no longer has data, causing a call to \em poll(2) (or similar
 * functions) to block indefinitely, even though there are events ready
 * to dispatch.
 *
 * To properly integrate the wayland display fd into a main loop, the
 * client should always call wl_display_dispatch_pending() and then
 * \ref wl_display_flush() prior to going back to sleep. At that point,
 * the fd typically doesn't have data so attempting I/O could block, but
 * events queued up on the default queue should be dispatched.
 *
 * A real-world example is a main loop that wakes up on a timerfd (or a
 * sound card fd becoming writable, for example in a video player), which
 * then triggers GL rendering and eventually eglSwapBuffers().
 * eglSwapBuffers() may call wl_display_dispatch_queue() if it didn't
 * receive the frame event for the previous frame, and as such queue
 * events in the default queue.
 *
 * \sa wl_display_dispatch(), wl_display_dispatch_queue(),
 * wl_display_flush()
 *
 * \memberof wl_display
 */

WL_EXPORT int
wl_display_dispatch_pending(struct wl_display *display)
{
	return wl_display_dispatch_queue_pending(display,
						 &display->default_queue);
}
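
/* Main-loop integration sketch (illustrative, not part of the original
 * source; fds[0] is assumed to be the display fd, fds[1] some other fd
 * such as a timerfd, and handle_timer() a hypothetical helper):
 *
 *	wl_display_dispatch_pending(display);
 *	wl_display_flush(display);
 *
 *	poll(fds, 2, -1);
 *
 *	if (fds[0].revents & POLLIN)
 *		wl_display_dispatch(display);
 *	if (fds[1].revents & POLLIN)
 *		handle_timer();
 */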

/** Retrieve the last error that occurred on a display
 *
 * \param display The display context object
 * \return The last error that occurred on \c display or 0 if no error occurred
 *
 * Return the last error that occurred on the display. This may be an error sent
 * by the server or caused by the local client.
 *
 * \note Errors are \b fatal. If this function returns non-zero the display
 * can no longer be used.
 *
 * \memberof wl_display
 */

WL_EXPORT int
wl_display_get_error(struct wl_display *display)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	ret = display->last_error;

	pthread_mutex_unlock(&display->mutex);

	return ret;
}
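
/* Error-handling sketch (illustrative, not part of the original
 * source): once a fatal error has occurred, the error code is sticky
 * and the display must be torn down.
 *
 *	if (wl_display_dispatch(display) == -1) {
 *		int err = wl_display_get_error(display);
 *		fprintf(stderr, "display error: %s\n", strerror(err));
 *		exit(EXIT_FAILURE);
 *	}
 */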

/** Retrieve information about a protocol error
 *
 * \param display The Wayland display
 * \param interface if not NULL, stores the interface where the error occurred
 * \param id if not NULL, stores the object id that generated
 * the error. There's no guarantee the object is
 * still valid; the client must know if it deleted the object.
 * \return The error code as defined in the interface specification.
 *
 * \code
 * int err = wl_display_get_error(display);
 *
 * if (err == EPROTO) {
 *	code = wl_display_get_protocol_error(display, &interface, &id);
 *	handle_error(code, interface, id);
 * }
 *
 * ...
 * \endcode
 *
 * \memberof wl_display
 */

WL_EXPORT uint32_t
wl_display_get_protocol_error(struct wl_display *display,
			      const struct wl_interface **interface,
			      uint32_t *id)
{
	uint32_t ret;

	pthread_mutex_lock(&display->mutex);

	ret = display->protocol_error.code;

	if (interface)
		*interface = display->protocol_error.interface;
	if (id)
		*id = display->protocol_error.id;

	pthread_mutex_unlock(&display->mutex);

	return ret;
}

/** Send all buffered requests on the display to the server
 *
 * \param display The display context object
 * \return The number of bytes sent on success or -1 on failure
 *
 * Send all buffered data on the client side to the server. Clients should
 * always call this function before blocking on input from the display fd.
 * On success, the number of bytes sent to the server is returned. On
 * failure, this function returns -1 and errno is set appropriately.
 *
 * wl_display_flush() never blocks. It will write as much data as
 * possible, but if all data could not be written, errno will be set
 * to EAGAIN and -1 returned. In that case, use poll on the display
 * file descriptor to wait for it to become writable again.
 *
 * \memberof wl_display
 */

WL_EXPORT int
wl_display_flush(struct wl_display *display)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	if (display->last_error) {
		errno = display->last_error;
		ret = -1;
	} else {
		ret = wl_connection_flush(display->connection);
		if (ret < 0 && errno != EAGAIN)
			display_fatal_error(display, errno);
	}

	pthread_mutex_unlock(&display->mutex);

	return ret;
}
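
/* Flush-retry sketch (illustrative, not part of the original source):
 * when wl_display_flush() fails with EAGAIN, wait for the display fd
 * to become writable and try again.
 *
 *	struct pollfd pfd = { wl_display_get_fd(display), POLLOUT, 0 };
 *
 *	while (wl_display_flush(display) == -1 && errno == EAGAIN)
 *		poll(&pfd, 1, -1);
 */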

/** Set the user data associated with a proxy
 *
 * \param proxy The proxy object
 * \param user_data The data to be associated with proxy
 *
 * Set the user data associated with \c proxy. When events for this
 * proxy are received, \c user_data will be supplied to its listener.
 *
 * \memberof wl_proxy
 */

WL_EXPORT void
wl_proxy_set_user_data(struct wl_proxy *proxy, void *user_data)
{
	proxy->user_data = user_data;
}

/** Get the user data associated with a proxy
 *
 * \param proxy The proxy object
 * \return The user data associated with proxy
 *
 * \memberof wl_proxy
 */

WL_EXPORT void *
wl_proxy_get_user_data(struct wl_proxy *proxy)
{
	return proxy->user_data;
}
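
/* User-data sketch (illustrative, not part of the original source;
 * struct app_state is a hypothetical application type): attach state
 * to a proxy so it can be recovered later, e.g. in an event handler.
 *
 *	static struct app_state state;
 *
 *	wl_proxy_set_user_data((struct wl_proxy *) surface, &state);
 *	...
 *	struct app_state *s =
 *		wl_proxy_get_user_data((struct wl_proxy *) surface);
 */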

/** Get the id of a proxy object
 *
 * \param proxy The proxy object
 * \return The id of the object associated with the proxy
 *
 * \memberof wl_proxy
 */

WL_EXPORT uint32_t
wl_proxy_get_id(struct wl_proxy *proxy)
{
	return proxy->object.id;
}

/** Get the interface name (class) of a proxy object
 *
 * \param proxy The proxy object
 * \return The interface name of the object associated with the proxy
 *
 * \memberof wl_proxy
 */

WL_EXPORT const char *
wl_proxy_get_class(struct wl_proxy *proxy)
{
	return proxy->object.interface->name;
}
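
/* Diagnostics sketch (illustrative, not part of the original source):
 * the class and id together identify a proxy, e.g. when reporting the
 * object returned by wl_display_get_protocol_error().
 *
 *	fprintf(stderr, "error on %s#%u\n",
 *		wl_proxy_get_class(proxy), wl_proxy_get_id(proxy));
 */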

/** Assign a proxy to an event queue
 *
 * \param proxy The proxy object
 * \param queue The event queue that will handle this proxy or NULL
 *
 * Assign \c proxy to \c queue. Events coming from \c proxy will be queued
 * in \c queue from now on. If \c queue is NULL, the proxy is reassigned to
 * the display's default queue.
 *
 * \note By default, the queue set in a proxy is the one inherited from
 * its parent.
 *
 * \sa wl_display_dispatch_queue()
 *
 * \memberof wl_proxy
 */

WL_EXPORT void
wl_proxy_set_queue(struct wl_proxy *proxy, struct wl_event_queue *queue)
{
	if (queue)
		proxy->queue = queue;
	else
		proxy->queue = &proxy->display->default_queue;
}
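
/* Private-queue sketch (illustrative, not part of the original source):
 * the middleware pattern from the wl_event_queue commit message above.
 * A subsystem creates its own queue, moves its proxies there, and
 * dispatches only that queue while waiting for an event ("done" is a
 * hypothetical flag set by the awaited event's handler).
 *
 *	struct wl_event_queue *queue = wl_display_create_queue(display);
 *
 *	wl_proxy_set_queue((struct wl_proxy *) callback, queue);
 *
 *	while (!done && wl_display_dispatch_queue(display, queue) != -1)
 *		;
 *
 *	wl_event_queue_destroy(queue);
 */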

WL_EXPORT void
wl_log_set_handler_client(wl_log_func_t handler)
{
	wl_log_handler = handler;
}