/*
 * Copyright © 2008-2012 Kristian Høgsberg
 * Copyright © 2010-2012 Intel Corporation
 *
 * Permission to use, copy, modify, distribute, and sell this software and its
 * documentation for any purpose is hereby granted without fee, provided that
 * the above copyright notice appear in all copies and that both that copyright
 * notice and this permission notice appear in supporting documentation, and
 * that the name of the copyright holders not be used in advertising or
 * publicity pertaining to distribution of the software without specific,
 * written prior permission. The copyright holders make no representations
 * about the suitability of this software for any purpose. It is provided "as
 * is" without express or implied warranty.
 *
 * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
 * INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
 * EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR
 * CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
 * DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
 * TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
 * OF THIS SOFTWARE.
 */

/*
 * Thread model: wl_display_prepare_read() and wl_display_read_events()
 *
 * The thread model originally assumed that the application or toolkit
 * would have one thread that either polls the display fd and dispatches
 * events or simply dispatches in a loop. Only this main thread would
 * read from the fd, while all other threads would block on a pthread
 * condition and expect the main thread to deliver events to them.
 *
 * This turned out to be too restrictive. We cannot assume that there
 * will always be such a thread. Qt QML threaded rendering blocks the
 * main thread on a condition that is signaled by a rendering thread
 * after it finishes rendering; this deadlocks when the rendering thread
 * blocks in eglSwapBuffers() while the main thread is waiting on the
 * condition. Another problematic use case is a game that has a
 * rendering thread for a splash screen while the main thread is busy
 * loading game data or compiling shaders: the main thread is not
 * responsive, and the rendering thread ends up blocked in
 * eglSwapBuffers().
 *
 * We also cannot assume that only one thread polls the file
 * descriptor. A valid use case is a thread receiving data from a
 * custom wayland interface as well as from a device fd or network
 * socket; to wait on either source, it needs to poll on both the
 * wayland display fd and the device/network fd.
 *
 * The solution seems straightforward: just let all threads read from
 * the fd. However, the main-thread restriction was introduced to avoid
 * a race. Simplified, main loops do something like this:
 *
 *	wl_display_dispatch_pending(display);
 *
 *	(race here if another thread reads from the fd and places events
 *	in the main event queue: we go to sleep in poll() while sitting
 *	on events that may stall the application if not dispatched)
 *
 *	poll(fds, nfds, -1);
 *
 *	(race here if another thread reads and doesn't queue any events
 *	for the main queue: wl_display_dispatch() below will block
 *	trying to read from the fd, while other fds in the main loop
 *	are ignored)
 *
 *	wl_display_dispatch(display);
 *
 * The restriction that only the main thread may read from the fd
 * avoids these races, but has the problems described above.
 *
 * The API below solves both problems:
 *
 *	int wl_display_prepare_read(struct wl_display *display);
 *	int wl_display_read_events(struct wl_display *display);
 *
 * wl_display_prepare_read() registers the calling thread as a
 * potential reader of events. Once data is available on the fd, all
 * reader threads must call wl_display_read_events(), at which point
 * one of the threads reads from the fd and distributes the events to
 * the event queues. When that is done, all threads return from
 * wl_display_read_events().
 *
 * From the point of view of a single thread, this ensures that between
 * calling wl_display_prepare_read() and wl_display_read_events(), no
 * other thread will read from the fd and queue events in its event
 * queue. This avoids the race conditions described above without
 * relying on any one thread being available to read events.
 */

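The handshake described above can be sketched with plain pthreads. In this minimal model the `model_*` names are hypothetical, mirroring the `reader_count`, `read_serial`, and `reader_cond` fields of `struct wl_display` below; it illustrates only the synchronization pattern, not the actual libwayland implementation (which also deals with the real fd, error paths, and cancellation).

```c
#include <pthread.h>
#include <stdint.h>

/* Toy model of the prepare_read()/read_events() handshake.  events_read
 * stands in for "data was read from the fd". */
struct model_display {
	pthread_mutex_t mutex;
	pthread_cond_t reader_cond;
	int reader_count;
	uint32_t read_serial;
	int events_read;
};

/* Register the calling thread as a potential reader; done before
 * polling, so no other thread reads events out from under us. */
static void
model_prepare_read(struct model_display *d)
{
	pthread_mutex_lock(&d->mutex);
	d->reader_count++;
	pthread_mutex_unlock(&d->mutex);
}

/* The last reader to arrive performs the read and wakes the others;
 * every other thread sleeps until read_serial advances. */
static void
model_read_events(struct model_display *d)
{
	uint32_t serial;

	pthread_mutex_lock(&d->mutex);
	if (--d->reader_count == 0) {
		d->events_read++;	/* "read from the fd" */
		d->read_serial++;
		pthread_cond_broadcast(&d->reader_cond);
	} else {
		serial = d->read_serial;
		while (d->read_serial == serial)
			pthread_cond_wait(&d->reader_cond, &d->mutex);
	}
	pthread_mutex_unlock(&d->mutex);
}

/* Thread entry point so a second thread can join the handshake. */
static void *
model_reader_start(void *arg)
{
	model_read_events(arg);
	return NULL;
}
```

With the real API, each thread would call wl_display_prepare_read(), poll the display fd, then wl_display_read_events(), and finally wl_display_dispatch_pending() on its own queue.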
#define _GNU_SOURCE

#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <ctype.h>
#include <assert.h>
#include <fcntl.h>
#include <poll.h>
#include <pthread.h>

#include "wayland-util.h"
#include "wayland-os.h"
#include "wayland-client.h"
#include "wayland-private.h"

/** \cond */

/*
 * Proxy validity and reference counting:
 *
 * When events are queued, the associated proxy objects (the target
 * proxy and any closure argument proxies) are verified to be valid.
 * However, since any event may destroy some proxy object, validity
 * must be verified again before dispatching. Looking the object up
 * again via the display object map does not work, because a delete_id
 * event could be dispatched out of order if it was queued in another
 * queue, leaving the object map with either a new proxy object under
 * the same id or none at all, had the proxy been destroyed by an
 * earlier event in the queue.
 *
 * Instead, wl_proxy is reference counted, and the reference counter of
 * every object associated with an event is increased when the event is
 * queued. wl_proxy_destroy() sets a flag saying the proxy has been
 * destroyed by the application and only frees the proxy if the
 * reference counter reaches zero after decreasing it.
 *
 * Before dispatching, a proxy object is verified to still be valid by
 * checking that the flag set in wl_proxy_destroy() has not been set.
 * When the event is dequeued, all associated proxy objects are
 * dereferenced and freed if the reference counter reaches zero. As the
 * proxy reference counter is initialized to 1, it can never reach zero
 * while dispatching an event without the destroyed flag being set.
 *
 * Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
 */

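The destroy-flag-plus-refcount scheme can be shown with a self-contained sketch. All `mini_*` names here are hypothetical stand-ins; the real `wl_proxy` (defined below) also carries its object, display, and queue pointers.

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for wl_proxy: only the flag and refcount. */
enum { MINI_FLAG_DESTROYED = 1 << 1 };

struct mini_proxy {
	uint32_t flags;
	int refcount;
};

static struct mini_proxy *
mini_proxy_create(void)
{
	struct mini_proxy *p = calloc(1, sizeof *p);

	if (p)
		p->refcount = 1;	/* the application holds the first reference */
	return p;
}

/* Called when an event referencing the proxy is queued. */
static void
mini_proxy_ref(struct mini_proxy *p)
{
	p->refcount++;
}

/* Called when the event is dequeued; returns 1 if the proxy was freed. */
static int
mini_proxy_unref(struct mini_proxy *p)
{
	if (--p->refcount == 0) {
		free(p);
		return 1;
	}
	return 0;
}

/* Application-side destroy: set the flag, then drop the app's reference. */
static void
mini_proxy_destroy(struct mini_proxy *p)
{
	p->flags |= MINI_FLAG_DESTROYED;
	mini_proxy_unref(p);
}

/* Dispatch-time validity check. */
static int
mini_proxy_is_valid(struct mini_proxy *p)
{
	return !(p->flags & MINI_FLAG_DESTROYED);
}
```

Because queued events hold their own references, the proxy's memory stays valid through the destroy, and the flag tells dispatch to skip it.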
enum wl_proxy_flag {
	WL_PROXY_FLAG_ID_DELETED = (1 << 0),
	WL_PROXY_FLAG_DESTROYED = (1 << 1)
};

struct wl_proxy {
	struct wl_object object;
	struct wl_display *display;
	struct wl_event_queue *queue;
	uint32_t flags;
	int refcount;
	void *user_data;
	wl_dispatcher_func_t dispatcher;
};

struct wl_global {
	uint32_t id;
	char *interface;
	uint32_t version;
	struct wl_list link;
};

/*
 * wl_event_queue is what makes multi-threaded wayland clients possible
 * and useful. The driving use case is a GL rendering thread that
 * renders and calls eglSwapBuffers() independently of a "main thread"
 * that owns the wl_display and handles input events and everything
 * else. In general, the EGL and GL APIs have a threading model that
 * requires the wayland client library to be usable from several
 * threads. A pure callback model also gets into trouble even in a
 * single-threaded scenario: if we have to block in eglSwapBuffers(),
 * we may end up doing unrelated callbacks from within EGL.
 *
 * The wl_event_queue mechanism lets the application (or middleware
 * such as EGL or toolkits) assign a proxy to an event queue. Only
 * events from objects associated with the queue will be put in the
 * queue, and conversely, events from those objects will not be queued
 * up anywhere else. The wl_display struct has a built-in event queue,
 * which is considered the main and default event queue. New proxies
 * are associated with the same queue as the object that created them
 * (either the object that a request with a new-id argument was sent to
 * or the object that sent an event with a new-id argument). A proxy
 * can be moved to a different event queue by calling
 * wl_proxy_set_queue().
 *
 * A subsystem such as EGL will then create its own event queue and
 * associate the objects it expects to receive events from with that
 * queue. If EGL needs to block and wait for a certain event, it can
 * keep dispatching events from its queue until that event comes in.
 * This won't call out to unrelated code with an EGL lock held.
 * Similarly, we don't risk the main thread handling an event from an
 * EGL object and then calling into EGL from a different thread without
 * the lock held.
 */
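The routing rule — an event lands only in the queue its proxy is assigned to — can be illustrated with a deliberately tiny model. The `mini_*` names are made up for this sketch, and real queues hold wl_closure lists rather than ints.

```c
/* Toy model of per-queue event routing. */
#define MINI_MAX_EVENTS 8

struct mini_queue {
	int events[MINI_MAX_EVENTS];
	int count;
};

struct mini_qproxy {
	struct mini_queue *queue;	/* queue this proxy's events go to */
};

/* Analogous to wl_proxy_set_queue(): move a proxy to another queue. */
static void
mini_qproxy_set_queue(struct mini_qproxy *p, struct mini_queue *q)
{
	p->queue = q;
}

/* An incoming event is queued on its target proxy's queue only. */
static void
mini_deliver(struct mini_qproxy *p, int event)
{
	struct mini_queue *q = p->queue;

	if (q->count < MINI_MAX_EVENTS)
		q->events[q->count++] = event;
}

/* Dispatch all pending events on one queue; returns how many ran. */
static int
mini_dispatch_queue(struct mini_queue *q)
{
	int n = q->count;

	q->count = 0;
	return n;
}
```

Dispatching one queue never touches events pending on another, which is exactly what lets EGL block on its own queue safely.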
struct wl_event_queue {
	struct wl_list link;
	struct wl_list event_list;
	struct wl_display *display;
	pthread_cond_t cond;
};

struct wl_display {
	struct wl_proxy proxy;
	struct wl_connection *connection;
	int last_error;
	int fd;
	pthread_t display_thread;
	struct wl_map objects;
	struct wl_event_queue display_queue;
	struct wl_event_queue default_queue;
	struct wl_list event_queue_list;
	pthread_mutex_t mutex;

	int reader_count;
	uint32_t read_serial;
	pthread_cond_t reader_cond;
};

/** \endcond */

static int debug_client = 0;

static void
display_fatal_error(struct wl_display *display, int error)
{
	struct wl_event_queue *iter;

	if (display->last_error)
		return;

	if (!error)
		error = 1;

	display->last_error = error;

	wl_list_for_each(iter, &display->event_queue_list, link)
		pthread_cond_broadcast(&iter->cond);
}

static void
wl_display_fatal_error(struct wl_display *display, int error)
{
	pthread_mutex_lock(&display->mutex);
	display_fatal_error(display, error);
	pthread_mutex_unlock(&display->mutex);
}

static void
wl_event_queue_init(struct wl_event_queue *queue, struct wl_display *display)
{
	wl_list_init(&queue->event_list);
	pthread_cond_init(&queue->cond, NULL);
	queue->display = display;
}

static void
wl_event_queue_release(struct wl_event_queue *queue)
{
	struct wl_closure *closure;

	while (!wl_list_empty(&queue->event_list)) {
		closure = container_of(queue->event_list.next,
				       struct wl_closure, link);
		wl_list_remove(&closure->link);
		wl_closure_destroy(closure);
	}
	pthread_cond_destroy(&queue->cond);
}

|
|
|
|
|
|
2012-10-12 17:28:57 +03:00
|
|
|
/** Destroy an event queue
|
|
|
|
|
*
|
|
|
|
|
* \param queue The event queue to be destroyed
|
|
|
|
|
*
|
|
|
|
|
* Destroy the given event queue. Any pending event on that queue is
|
|
|
|
|
* discarded.
|
|
|
|
|
*
|
2012-10-16 17:29:07 +03:00
|
|
|
* The \ref wl_display object used to create the queue should not be
|
|
|
|
|
* destroyed until all event queues created with it are destroyed with
|
|
|
|
|
* this function.
|
|
|
|
|
*
|
2012-10-12 17:28:57 +03:00
|
|
|
* \memberof wl_event_queue
|
|
|
|
|
*/
|
client: Add wl_event_queue for multi-thread dispatching
This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffer independently of
a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several threads.
Finally, the current callback model gets into trouble even in a single
threaded scenario: if we have to block in eglSwapBuffers, we may end up
doing unrelated callbacks from within EGL.
The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from objects
associated with the queue will be put in the queue, and conversely,
events from objects associated with the queue will not be queue up anywhere
else. The wl_display struct has a built-in event queue, which is considered
the main and default event queue. New proxies are associated with the
same queue as the object that created them (either the object that a
request with a new-id argument was sent to or the object that sent an
event with a new-id argument). A proxy can be moved to a different event
queue by calling wl_proxy_set_queue().
A subsystem, such as EGL, will then create its own event queue and associate
the objects it expects to receive events from with that queue. If EGL
needs to block and wait for a certain event, it can keep dispatching event
from its queue until that events comes in. This wont call out to unrelated
code with an EGL lock held. Similarly, we don't risk the main thread
handling an event from an EGL object and then calling into EGL from a
different thread without the lock held.
2012-10-05 13:49:48 -04:00
WL_EXPORT void
wl_event_queue_destroy(struct wl_event_queue *queue)
{
	struct wl_display *display = queue->display;

	pthread_mutex_lock(&display->mutex);
	wl_list_remove(&queue->link);
	wl_event_queue_release(queue);
	free(queue);

	pthread_mutex_unlock(&display->mutex);
}

/** Create a new event queue for this display
 *
 * \param display The display context object
 * \return A new event queue associated with this display or NULL on
 * failure.
 *
 * \memberof wl_display
 */
WL_EXPORT struct wl_event_queue *
wl_display_create_queue(struct wl_display *display)
{
	struct wl_event_queue *queue;

	queue = malloc(sizeof *queue);
	if (queue == NULL)
		return NULL;

	wl_event_queue_init(queue, display);

	pthread_mutex_lock(&display->mutex);
	wl_list_insert(&display->event_queue_list, &queue->link);
	pthread_mutex_unlock(&display->mutex);

	return queue;
}

client: Introduce functions to allocate and marshal proxies atomically
The server requires clients to only allocate one ID ahead of the previously
highest ID in order to keep the ID range tight. Failure to do so will
make the server close the client connection. However, the way we allocate
new IDs is racy. The generated code looks like:
new_proxy = wl_proxy_create(...);
wl_proxy_marshal(proxy, ... new_proxy, ...);
If two threads do this at the same time, there's a chance that thread A
will allocate a proxy, then get pre-empted by thread B which then allocates
a proxy and then passes it to wl_proxy_marshal(). The ID for thread As
proxy will be one higher that the currently highest ID, but the ID for
thread Bs proxy will be two higher. But since thread B prempted thread A
before it could send its new ID, B will send its new ID first, the server
will see the ID from thread Bs proxy first, and will reject it.
We fix this by introducing wl_proxy_marshal_constructor(). This
function is identical to wl_proxy_marshal(), except that it will
allocate a wl_proxy for NEW_ID arguments and send it, all under the
display mutex. By introducing a new function, we maintain backwards
compatibility with older code from the generator, and make sure that
the new generated code has an explicit dependency on a new enough
libwayland-client.so.
A virtual Wayland merit badge goes to Kalle Vahlman, who tracked this
down and analyzed the issue.
Reported-by: Kalle Vahlman <kalle.vahlman@movial.com>
2013-11-14 21:29:06 -08:00
static struct wl_proxy *
proxy_create(struct wl_proxy *factory, const struct wl_interface *interface)
{
	struct wl_proxy *proxy;
	struct wl_display *display = factory->display;

	proxy = malloc(sizeof *proxy);
	if (proxy == NULL)
		return NULL;

	proxy->object.interface = interface;
	proxy->object.implementation = NULL;
	proxy->dispatcher = NULL;
	proxy->display = display;
	proxy->queue = factory->queue;
	proxy->flags = 0;
	proxy->refcount = 1;

	proxy->object.id = wl_map_insert_new(&display->objects, 0, proxy);

	return proxy;
}

/** Create a proxy object with a given interface
 *
 * \param factory Factory proxy object
 * \param interface Interface the proxy object should use
 * \return A newly allocated proxy object or NULL on failure
 *
 * This function creates a new proxy object with the supplied interface. The
 * proxy object will have an id assigned from the client id space. The id
 * should be created on the compositor side by sending an appropriate request
 * with \ref wl_proxy_marshal().
 *
 * The proxy will inherit the display and event queue of the factory object.
 *
 * \note This should not normally be used by non-generated code.
 *
 * \sa wl_display, wl_event_queue, wl_proxy_marshal()
 *
 * \memberof wl_proxy
 */
WL_EXPORT struct wl_proxy *
wl_proxy_create(struct wl_proxy *factory, const struct wl_interface *interface)
{
	struct wl_display *display = factory->display;
	struct wl_proxy *proxy;

	pthread_mutex_lock(&display->mutex);
	proxy = proxy_create(factory, interface);
	pthread_mutex_unlock(&display->mutex);

	return proxy;
}

/* The caller should hold the display lock */
static struct wl_proxy *
wl_proxy_create_for_id(struct wl_proxy *factory,
		       uint32_t id, const struct wl_interface *interface)
{
	struct wl_proxy *proxy;
	struct wl_display *display = factory->display;

	proxy = malloc(sizeof *proxy);
	if (proxy == NULL)
		return NULL;

	proxy->object.interface = interface;
	proxy->object.implementation = NULL;
	proxy->object.id = id;
	proxy->dispatcher = NULL;
	proxy->display = display;
	proxy->queue = factory->queue;
client: Keep track of proxy validity and number of reference holders
When events are queued, the associated proxy objects (target proxy and
potentially closure argument proxies) are verified being valid. However,
as any event may destroy some proxy object, validity needs to be
verified again before dispatching. Before this change this was done by
again looking up the object via the display object map, but that did not
work because a delete_id event could be dispatched out-of-order if it
was queued in another queue, causing the object map to either have a new
proxy object with the same id or none at all, had it been destroyed in
an earlier event in the queue.
Instead, make wl_proxy reference counted and increase the reference
counter of every object associated with an event when it is queued. In
wl_proxy_destroy() set a flag saying the proxy has been destroyed by the
application and only free the proxy if the reference counter reaches
zero after decreasing it.
Before dispatching, verify that a proxy object still is valid by
checking that the flag set in wl_proxy_destroy() has not been set. When
dequeuing the event, all associated proxy objects are dereferenced and
free:ed if the reference counter reaches zero. As proxy reference counter
is initiated to 1, when dispatching an event it can never reach zero
without having the destroyed flag set.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
2012-11-03 22:26:10 +01:00
	proxy->flags = 0;
	proxy->refcount = 1;

	wl_map_insert_at(&display->objects, 0, id, proxy);

	return proxy;
}

/** Destroy a proxy object
 *
 * \param proxy The proxy to be destroyed
 *
 * \memberof wl_proxy
 */
WL_EXPORT void
wl_proxy_destroy(struct wl_proxy *proxy)
{
	struct wl_display *display = proxy->display;

	pthread_mutex_lock(&display->mutex);

	if (proxy->flags & WL_PROXY_FLAG_ID_DELETED)
		wl_map_remove(&proxy->display->objects, proxy->object.id);
	else if (proxy->object.id < WL_SERVER_ID_START)
		wl_map_insert_at(&proxy->display->objects, 0,
				 proxy->object.id, WL_ZOMBIE_OBJECT);
	else
		wl_map_insert_at(&proxy->display->objects, 0,
				 proxy->object.id, NULL);

	proxy->flags |= WL_PROXY_FLAG_DESTROYED;

	proxy->refcount--;
	if (!proxy->refcount)
		free(proxy);

	pthread_mutex_unlock(&display->mutex);
}

/** Set a proxy's listener
 *
 * \param proxy The proxy object
 * \param implementation The listener to be added to proxy
 * \param data User data to be associated with the proxy
 * \return 0 on success or -1 on failure
 *
 * Set proxy's listener to \c implementation and its user data to
 * \c data. If a listener has already been set, this function
 * fails and nothing is changed.
 *
 * \c implementation is a vector of function pointers. For an opcode
 * \c n, \c implementation[n] should point to the handler of \c n for
 * the given object.
 *
 * \memberof wl_proxy
 */
WL_EXPORT int
wl_proxy_add_listener(struct wl_proxy *proxy,
		      void (**implementation)(void), void *data)
{
	if (proxy->object.implementation || proxy->dispatcher) {
		wl_log("proxy %p already has listener\n", proxy);
		return -1;
	}

	proxy->object.implementation = implementation;
	proxy->user_data = data;

	return 0;
}

/** Get a proxy's listener
 *
 * \param proxy The proxy object
 * \return The address of the proxy's listener or NULL if no listener is set
 *
 * Gets the address of the proxy's listener, i.e. the listener set with
 * \ref wl_proxy_add_listener.
 *
 * This function is useful in clients with multiple listeners on the same
 * interface to allow the identification of which code to execute.
 *
 * \memberof wl_proxy
 */
WL_EXPORT const void *
wl_proxy_get_listener(struct wl_proxy *proxy)
{
	return proxy->object.implementation;
}

/** Set a proxy's listener (with dispatcher)
 *
 * \param proxy The proxy object
 * \param dispatcher The dispatcher to be used for this proxy
 * \param implementation The dispatcher-specific listener implementation
 * \param data User data to be associated with the proxy
 * \return 0 on success or -1 on failure
 *
 * Set proxy's listener to use \c dispatcher as its dispatcher and
 * \c implementation as its dispatcher-specific implementation, and its
 * user data to \c data. If a listener has already been set, this function
 * fails and nothing is changed.
 *
 * The exact details of \c implementation depend on the dispatcher used. This
 * function is intended to be used by language bindings, not user code.
 *
 * \memberof wl_proxy
 */
WL_EXPORT int
wl_proxy_add_dispatcher(struct wl_proxy *proxy,
			wl_dispatcher_func_t dispatcher,
			const void *implementation, void *data)
{
	if (proxy->object.implementation || proxy->dispatcher) {
		wl_log("proxy %p already has listener\n", proxy);
		return -1;
	}

	proxy->object.implementation = implementation;
	proxy->dispatcher = dispatcher;
	proxy->user_data = data;

	return 0;
}

|
|
|
|
|
|
client: Introduce functions to allocate and marshal proxies atomically
The server requires clients to only allocate one ID ahead of the previously
highest ID in order to keep the ID range tight. Failure to do so will
make the server close the client connection. However, the way we allocate
new IDs is racy. The generated code looks like:
new_proxy = wl_proxy_create(...);
wl_proxy_marshal(proxy, ... new_proxy, ...);
If two threads do this at the same time, there's a chance that thread A
will allocate a proxy, then get pre-empted by thread B, which then allocates
a proxy and passes it to wl_proxy_marshal(). The ID for thread A's
proxy will be one higher than the currently highest ID, but the ID for
thread B's proxy will be two higher. But since thread B pre-empted thread A
before it could send its new ID, B will send its new ID first, the server
will see the ID from thread B's proxy first, and will reject it.
We fix this by introducing wl_proxy_marshal_constructor(). This
function is identical to wl_proxy_marshal(), except that it will
allocate a wl_proxy for NEW_ID arguments and send it, all under the
display mutex. By introducing a new function, we maintain backwards
compatibility with older code from the generator, and make sure that
the new generated code has an explicit dependency on a new enough
libwayland-client.so.
A virtual Wayland merit badge goes to Kalle Vahlman, who tracked this
down and analyzed the issue.
Reported-by: Kalle Vahlman <kalle.vahlman@movial.com>
2013-11-14 21:29:06 -08:00
static struct wl_proxy *
create_outgoing_proxy(struct wl_proxy *proxy, const struct wl_message *message,
		      union wl_argument *args,
		      const struct wl_interface *interface)
{
	int i, count;
	const char *signature;
	struct argument_details arg;
	struct wl_proxy *new_proxy = NULL;

	signature = message->signature;
	count = arg_count_for_signature(signature);
	for (i = 0; i < count; i++) {
		signature = get_next_argument(signature, &arg);

		switch (arg.type) {
		case 'n':
			new_proxy = proxy_create(proxy, interface);
			if (new_proxy == NULL)
				return NULL;

			args[i].o = &new_proxy->object;
			break;
		}
	}

	return new_proxy;
}
/** Prepare a request to be sent to the compositor
 *
 * \param proxy The proxy object
 * \param opcode Opcode of the request to be sent
 * \param args Extra arguments for the given request
 * \param interface The interface to use for the new proxy
 * \return A new wl_proxy for the new-id argument or NULL on error
 *
 * Translates the request given by opcode and the extra arguments into the
 * wire format and writes it to the connection buffer. This version takes an
 * array of the union type wl_argument.
 *
 * For new-id arguments, this function will allocate a new wl_proxy
 * and send the ID to the server. The new wl_proxy will be returned
 * on success or NULL on error with errno set accordingly.
 *
 * \note This is intended to be used by language bindings and not in
 * non-generated code.
 *
 * \sa wl_proxy_marshal()
 *
 * \memberof wl_proxy
 */
WL_EXPORT struct wl_proxy *
wl_proxy_marshal_array_constructor(struct wl_proxy *proxy,
				   uint32_t opcode, union wl_argument *args,
				   const struct wl_interface *interface)
{
	struct wl_closure *closure;
	struct wl_proxy *new_proxy = NULL;
	const struct wl_message *message;

	pthread_mutex_lock(&proxy->display->mutex);

	message = &proxy->object.interface->methods[opcode];
	if (interface) {
		new_proxy = create_outgoing_proxy(proxy, message,
						  args, interface);
		if (new_proxy == NULL)
			goto err_unlock;
	}

	closure = wl_closure_marshal(&proxy->object, opcode, args, message);
	if (closure == NULL) {
		wl_log("Error marshalling request: %m\n");
		abort();
	}

	if (debug_client)
		wl_closure_print(closure, &proxy->object, true);

	if (wl_closure_send(closure, proxy->display->connection)) {
		wl_log("Error sending request: %m\n");
		abort();
	}

	wl_closure_destroy(closure);

 err_unlock:
	pthread_mutex_unlock(&proxy->display->mutex);

	return new_proxy;
}
/** Prepare a request to be sent to the compositor
 *
 * \param proxy The proxy object
 * \param opcode Opcode of the request to be sent
 * \param ... Extra arguments for the given request
 *
 * This function is similar to wl_proxy_marshal_constructor(), except
 * it doesn't create proxies for new-id arguments.
 *
 * \note This should not normally be used by non-generated code.
 *
 * \sa wl_proxy_create()
 *
 * \memberof wl_proxy
 */
WL_EXPORT void
wl_proxy_marshal(struct wl_proxy *proxy, uint32_t opcode, ...)
{
	union wl_argument args[WL_CLOSURE_MAX_ARGS];
	va_list ap;

	va_start(ap, opcode);
	wl_argument_from_va_list(proxy->object.interface->methods[opcode].signature,
				 args, WL_CLOSURE_MAX_ARGS, ap);
	va_end(ap);

	wl_proxy_marshal_array_constructor(proxy, opcode, args, NULL);
}
/** Prepare a request to be sent to the compositor
 *
 * \param proxy The proxy object
 * \param opcode Opcode of the request to be sent
 * \param interface The interface to use for the new proxy
 * \param ... Extra arguments for the given request
 * \return A new wl_proxy for the new-id argument or NULL on error
 *
 * Translates the request given by opcode and the extra arguments into the
 * wire format and writes it to the connection buffer.
 *
 * For new-id arguments, this function will allocate a new wl_proxy
 * and send the ID to the server. The new wl_proxy will be returned
 * on success or NULL on error with errno set accordingly.
 *
 * \note This should not normally be used by non-generated code.
 *
 * \memberof wl_proxy
 */
WL_EXPORT struct wl_proxy *
wl_proxy_marshal_constructor(struct wl_proxy *proxy, uint32_t opcode,
			     const struct wl_interface *interface, ...)
{
	union wl_argument args[WL_CLOSURE_MAX_ARGS];
	va_list ap;

	va_start(ap, interface);
	wl_argument_from_va_list(proxy->object.interface->methods[opcode].signature,
				 args, WL_CLOSURE_MAX_ARGS, ap);
	va_end(ap);

	return wl_proxy_marshal_array_constructor(proxy, opcode,
						  args, interface);
}
/** Prepare a request to be sent to the compositor
 *
 * \param proxy The proxy object
 * \param opcode Opcode of the request to be sent
 * \param args Extra arguments for the given request
 *
 * This function is similar to wl_proxy_marshal_array_constructor(), except
 * it doesn't create proxies for new-id arguments.
 *
 * \note This is intended to be used by language bindings and not in
 * non-generated code.
 *
 * \sa wl_proxy_marshal()
 *
 * \memberof wl_proxy
 */
WL_EXPORT void
wl_proxy_marshal_array(struct wl_proxy *proxy, uint32_t opcode,
		       union wl_argument *args)
{
	wl_proxy_marshal_array_constructor(proxy, opcode, args, NULL);
}
static void
display_handle_error(void *data,
		     struct wl_display *display, void *object,
		     uint32_t code, const char *message)
{
	struct wl_proxy *proxy = object;
	int err;

	wl_log("%s@%u: error %d: %s\n",
	       proxy->object.interface->name, proxy->object.id, code, message);

	switch (code) {
	case WL_DISPLAY_ERROR_INVALID_OBJECT:
	case WL_DISPLAY_ERROR_INVALID_METHOD:
		err = EINVAL;
		break;
	case WL_DISPLAY_ERROR_NO_MEMORY:
		err = ENOMEM;
		break;
	default:
		err = EFAULT;
		break;
	}

	wl_display_fatal_error(display, err);
}

client: Keep track of proxy validity and number of reference holders
When events are queued, the associated proxy objects (target proxy and
potentially closure argument proxies) are verified as being valid. However,
as any event may destroy some proxy object, validity needs to be
verified again before dispatching. Before this change this was done by
again looking up the object via the display object map, but that did not
work, because a delete_id event could be dispatched out of order if it
was queued in another queue, causing the object map to either have a new
proxy object with the same id or none at all, had it been destroyed in
an earlier event in the queue.
Instead, make wl_proxy reference counted and increase the reference
counter of every object associated with an event when it is queued. In
wl_proxy_destroy(), set a flag saying the proxy has been destroyed by the
application and only free the proxy if the reference counter reaches
zero after decreasing it.
Before dispatching, verify that a proxy object is still valid by
checking that the flag set in wl_proxy_destroy() has not been set. When
dequeuing the event, all associated proxy objects are dereferenced and
freed if the reference counter reaches zero. As the proxy reference counter
is initialized to 1, when dispatching an event it can never reach zero
without having the destroyed flag set.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
2012-11-03 22:26:10 +01:00

static void
display_handle_delete_id(void *data, struct wl_display *display, uint32_t id)
{
	struct wl_proxy *proxy;

	pthread_mutex_lock(&display->mutex);

	proxy = wl_map_lookup(&display->objects, id);

	if (!proxy)
		wl_log("error: received delete_id for unknown id (%u)\n", id);

	if (proxy && proxy != WL_ZOMBIE_OBJECT)
		proxy->flags |= WL_PROXY_FLAG_ID_DELETED;
	else
		wl_map_remove(&display->objects, id);

	pthread_mutex_unlock(&display->mutex);
}

static const struct wl_display_listener display_listener = {
	display_handle_error,
	display_handle_delete_id
};

static int
connect_to_socket(const char *name)
{
	struct sockaddr_un addr;
	socklen_t size;
	const char *runtime_dir;
	int name_size, fd;

	runtime_dir = getenv("XDG_RUNTIME_DIR");
	if (!runtime_dir) {
		wl_log("error: XDG_RUNTIME_DIR not set in the environment.\n");
		/* to prevent programs reporting
		 * "failed to create display: Success" */
		errno = ENOENT;
		return -1;
	}

	if (name == NULL)
		name = getenv("WAYLAND_DISPLAY");
	if (name == NULL)
		name = "wayland-0";

	fd = wl_os_socket_cloexec(PF_LOCAL, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof addr);
	addr.sun_family = AF_LOCAL;
	name_size =
		snprintf(addr.sun_path, sizeof addr.sun_path,
			 "%s/%s", runtime_dir, name) + 1;

	assert(name_size > 0);
	if (name_size > (int)sizeof addr.sun_path) {
		wl_log("error: socket path \"%s/%s\" plus null terminator"
		       " exceeds 108 bytes\n", runtime_dir, name);
		close(fd);
		/* to prevent programs reporting
		 * "failed to add socket: Success" */
		errno = ENAMETOOLONG;
		return -1;
	}

	size = offsetof (struct sockaddr_un, sun_path) + name_size;

	if (connect(fd, (struct sockaddr *) &addr, size) < 0) {
		close(fd);
		return -1;
	}

	return fd;
}
/** Connect to Wayland display on an already open fd
 *
 * \param fd The fd to use for the connection
 * \return A \ref wl_display object or \c NULL on failure
 *
 * The wl_display takes ownership of the fd and will close it when the
 * display is destroyed. The fd will also be closed in case of
 * failure.
 *
 * \memberof wl_display
 */
WL_EXPORT struct wl_display *
wl_display_connect_to_fd(int fd)
{
	struct wl_display *display;
	const char *debug;

	debug = getenv("WAYLAND_DEBUG");
	if (debug && (strstr(debug, "client") || strstr(debug, "1")))
		debug_client = 1;

	display = malloc(sizeof *display);
	if (display == NULL) {
		close(fd);
		return NULL;
	}

	memset(display, 0, sizeof *display);

	display->fd = fd;
	wl_map_init(&display->objects, WL_MAP_CLIENT_SIDE);
	wl_event_queue_init(&display->default_queue, display);
	wl_event_queue_init(&display->display_queue, display);
	wl_list_init(&display->event_queue_list);
	pthread_mutex_init(&display->mutex, NULL);
client: Add wl_display_prepare_read() API to relax thread model assumptions
The current thread model assumes that the application or toolkit will have
one thread that either polls the display fd and dispatches events or just
dispatches in a loop. Only this main thread will read from the fd while
all other threads will block on a pthread condition and expect the main
thread to deliver events to them.
This turns out to be too restrictive. We can't assume that there
always will be a thread like that. Qt QML threaded rendering will
block the main thread on a condition that's signaled by a rendering
thread after it finishes rendering. This leads to a deadlock when the
rendering threads blocks in eglSwapBuffers(), and the main thread is
waiting on the condition. Another problematic use case is with games
that has a rendering thread for a splash screen while the main thread
is busy loading game data or compiling shaders. The main thread isn't
responsive and ends up blocking eglSwapBuffers() in the rendering thread.
We also can't assume that there will be only one thread polling on the
file descriptor. A valid use case is a thread receiving data from a
custom wayland interface as well as a device fd or network socket.
The thread may want to wait on either events from the wayland
interface or data from the fd, in which case it needs to poll on both
the wayland display fd and the device/network fd.
The solution seems pretty straightforward: just let all threads read
from the fd. However, the main-thread restriction was introduced to
avoid a race. Simplified, main loops will do something like this:
wl_display_dispatch_pending(display);
/* Race here if other thread reads from fd and places events
* in main eent queue. We go to sleep in poll while sitting on
* events that may stall the application if not dispatched. */
poll(fds, nfds, -1);
/* Race here if other thread reads and doesn't queue any
* events for main queue. wl_display_dispatch() below will block
* trying to read from the fd, while other fds in the mainloop
* are ignored. */
wl_display_dispatch(display);
The restriction that only the main thread can read from the fd avoids
these races, but has the problems described above.
This patch introduces new API to solve both problems. We add
int wl_display_prepare_read(struct wl_display *display);
and
int wl_display_read_events(struct wl_display *display);
wl_display_prepare_read() registers the calling thread as a potential
reader of events. Once data is available on the fd, all reader
threads must call wl_display_read_events(), at which point one of the
threads will read from the fd and distribute the events to event
queues. When that is done, all threads return from
wl_display_read_events().
From the point of view of a single thread, this ensures that between
calling wl_display_prepare_read() and wl_display_read_events(), no
other thread will read from the fd and queue events in its event
queue. This avoids the race conditions described above, and we avoid
relying on any one thread to be available to read events.
2013-03-17 14:21:48 -04:00
|
|
|
pthread_cond_init(&display->reader_cond, NULL);
|
|
|
|
|
display->reader_count = 0;
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2013-06-01 17:40:53 -05:00
|
|
|
wl_map_insert_new(&display->objects, 0, NULL);
|
2011-08-19 22:50:53 -04:00
|
|
|
|
2010-12-01 17:07:41 -05:00
|
|
|
display->proxy.object.interface = &wl_display_interface;
|
2011-11-18 21:59:36 -05:00
|
|
|
display->proxy.object.id =
|
2013-06-01 17:40:53 -05:00
|
|
|
wl_map_insert_new(&display->objects, 0, display);
|
2008-12-21 21:50:23 -05:00
|
|
|
display->proxy.display = display;
|
2011-08-19 22:50:53 -04:00
|
|
|
display->proxy.object.implementation = (void(**)(void)) &display_listener;
|
2011-02-18 15:28:54 -05:00
|
|
|
display->proxy.user_data = display;
|
2014-02-07 16:00:21 -08:00
|
|
|
display->proxy.queue = &display->default_queue;
|
client: Keep track of proxy validity and number of reference holders
When events are queued, the associated proxy objects (target proxy and
potentially closure argument proxies) are verified being valid. However,
as any event may destroy some proxy object, validity needs to be
verified again before dispatching. Before this change this was done by
again looking up the object via the display object map, but that did not
work because a delete_id event could be dispatched out-of-order if it
was queued in another queue, causing the object map to either have a new
proxy object with the same id or none at all, had it been destroyed in
an earlier event in the queue.
Instead, make wl_proxy reference counted and increase the reference
counter of every object associated with an event when it is queued. In
wl_proxy_destroy() set a flag saying the proxy has been destroyed by the
application and only free the proxy if the reference counter reaches
zero after decreasing it.
Before dispatching, verify that a proxy object still is valid by
checking that the flag set in wl_proxy_destroy() has not been set. When
dequeuing the event, all associated proxy objects are dereferenced and
free:ed if the reference counter reaches zero. As proxy reference counter
is initiated to 1, when dispatching an event it can never reach zero
without having the destroyed flag set.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
2012-11-03 22:26:10 +01:00
|
|
|
display->proxy.flags = 0;
|
|
|
|
|
display->proxy.refcount = 1;
|
2008-10-07 10:10:36 -04:00
|
|
|
|
2012-10-04 16:54:22 -04:00
|
|
|
display->connection = wl_connection_create(display->fd);
|
client: Add wl_display_prepare_read() API to relax thread model assumptions
The current thread model assumes that the application or toolkit will have
one thread that either polls the display fd and dispatches events or just
dispatches in a loop. Only this main thread will read from the fd while
all other threads will block on a pthread condition and expect the main
thread to deliver events to them.
This turns out to be too restrictive. We can't assume that there
always will be a thread like that. Qt QML threaded rendering will
block the main thread on a condition that's signaled by a rendering
thread after it finishes rendering. This leads to a deadlock when the
rendering threads blocks in eglSwapBuffers(), and the main thread is
waiting on the condition. Another problematic use case is with games
that has a rendering thread for a splash screen while the main thread
is busy loading game data or compiling shaders. The main thread isn't
responsive and ends up blocking eglSwapBuffers() in the rendering thread.
We also can't assume that there will be only one thread polling on the
file descriptor. A valid use case is a thread receiving data from a
custom wayland interface as well as a device fd or network socket.
The thread may want to wait on either events from the wayland
interface or data from the fd, in which case it needs to poll on both
the wayland display fd and the device/network fd.
The solution seems pretty straightforward: just let all threads read
from the fd. However, the main-thread restriction was introduced to
avoid a race. Simplified, main loops will do something like this:
wl_display_dispatch_pending(display);
/* Race here if other thread reads from fd and places events
* in main eent queue. We go to sleep in poll while sitting on
* events that may stall the application if not dispatched. */
poll(fds, nfds, -1);
/* Race here if other thread reads and doesn't queue any
* events for main queue. wl_display_dispatch() below will block
* trying to read from the fd, while other fds in the mainloop
* are ignored. */
wl_display_dispatch(display);
The restriction that only the main thread can read from the fd avoids
these races, but has the problems described above.
This patch introduces new API to solve both problems. We add
int wl_display_prepare_read(struct wl_display *display);
and
int wl_display_read_events(struct wl_display *display);
wl_display_prepare_read() registers the calling thread as a potential
reader of events. Once data is available on the fd, all reader
threads must call wl_display_read_events(), at which point one of the
threads will read from the fd and distribute the events to event
queues. When that is done, all threads return from
wl_display_read_events().
From the point of view of a single thread, this ensures that between
calling wl_display_prepare_read() and wl_display_read_events(), no
other thread will read from the fd and queue events in its event
queue. This avoids the race conditions described above, and we avoid
relying on any one thread to be available to read events.
2013-03-17 14:21:48 -04:00
|
|
|
if (display->connection == NULL)
|
|
|
|
|
goto err_connection;
|
2011-04-14 10:38:44 -04:00
|
|
|
|
2008-10-08 13:32:07 -04:00
|
|
|
return display;
|
client: Add wl_display_prepare_read() API to relax thread model assumptions
The current thread model assumes that the application or toolkit will have
one thread that either polls the display fd and dispatches events or just
dispatches in a loop. Only this main thread will read from the fd while
all other threads will block on a pthread condition and expect the main
thread to deliver events to them.
This turns out to be too restrictive. We can't assume that there
always will be a thread like that. Qt QML threaded rendering will
block the main thread on a condition that's signaled by a rendering
thread after it finishes rendering. This leads to a deadlock when the
rendering threads blocks in eglSwapBuffers(), and the main thread is
waiting on the condition. Another problematic use case is with games
that has a rendering thread for a splash screen while the main thread
is busy loading game data or compiling shaders. The main thread isn't
responsive and ends up blocking eglSwapBuffers() in the rendering thread.
We also can't assume that there will be only one thread polling on the
file descriptor. A valid use case is a thread receiving data from a
custom wayland interface as well as a device fd or network socket.
The thread may want to wait on either events from the wayland
interface or data from the fd, in which case it needs to poll on both
the wayland display fd and the device/network fd.
The solution seems pretty straightforward: just let all threads read
from the fd. However, the main-thread restriction was introduced to
avoid a race. Simplified, main loops will do something like this:
wl_display_dispatch_pending(display);
/* Race here if other thread reads from fd and places events
* in main eent queue. We go to sleep in poll while sitting on
* events that may stall the application if not dispatched. */
poll(fds, nfds, -1);
/* Race here if other thread reads and doesn't queue any
* events for main queue. wl_display_dispatch() below will block
* trying to read from the fd, while other fds in the mainloop
* are ignored. */
wl_display_dispatch(display);
The restriction that only the main thread can read from the fd avoids
these races, but has the problems described above.
This patch introduces new API to solve both problems. We add
int wl_display_prepare_read(struct wl_display *display);
and
int wl_display_read_events(struct wl_display *display);
wl_display_prepare_read() registers the calling thread as a potential
reader of events. Once data is available on the fd, all reader
threads must call wl_display_read_events(), at which point one of the
threads will read from the fd and distribute the events to event
queues. When that is done, all threads return from
wl_display_read_events().
From the point of view of a single thread, this ensures that between
calling wl_display_prepare_read() and wl_display_read_events(), no
other thread will read from the fd and queue events in its event
queue. This avoids the race conditions described above, and we avoid
relying on any one thread to be available to read events.
2013-03-17 14:21:48 -04:00
|
|
|
|
|
|
|
|
err_connection:
|
|
|
|
|
pthread_mutex_destroy(&display->mutex);
|
|
|
|
|
pthread_cond_destroy(&display->reader_cond);
|
|
|
|
|
wl_map_release(&display->objects);
|
|
|
|
|
close(display->fd);
|
|
|
|
|
free(display);
|
|
|
|
|
|
|
|
|
|
return NULL;
|
2008-10-07 10:10:36 -04:00
|
|
|
}
|
|
|
|
|
|
2012-10-12 17:28:57 +03:00
|
|
|
/** Connect to a Wayland display
|
|
|
|
|
*
|
|
|
|
|
* \param name Name of the Wayland display to connect to
|
|
|
|
|
* \return A \ref wl_display object or \c NULL on failure
|
|
|
|
|
*
|
|
|
|
|
* Connect to the Wayland display named \c name. If \c name is \c NULL,
|
2013-08-09 01:47:06 +00:00
|
|
|
* its value will be replaced with the WAYLAND_DISPLAY environment
|
2012-10-12 17:28:57 +03:00
|
|
|
* variable if it is set, otherwise display "wayland-0" will be used.
|
|
|
|
|
*
|
|
|
|
|
* \memberof wl_display
|
|
|
|
|
*/
|
2012-08-14 13:16:10 -04:00
|
|
|
WL_EXPORT struct wl_display *
|
|
|
|
|
wl_display_connect(const char *name)
|
|
|
|
|
{
|
|
|
|
|
char *connection, *end;
|
|
|
|
|
int flags, fd;
|
|
|
|
|
|
|
|
|
|
connection = getenv("WAYLAND_SOCKET");
|
|
|
|
|
if (connection) {
|
|
|
|
|
fd = strtol(connection, &end, 0);
|
|
|
|
|
if (*end != '\0')
|
|
|
|
|
return NULL;
|
|
|
|
|
|
|
|
|
|
flags = fcntl(fd, F_GETFD);
|
|
|
|
|
if (flags != -1)
|
|
|
|
|
fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
|
|
|
|
|
unsetenv("WAYLAND_SOCKET");
|
|
|
|
|
} else {
|
|
|
|
|
fd = connect_to_socket(name);
|
|
|
|
|
if (fd < 0)
|
|
|
|
|
return NULL;
|
|
|
|
|
}
|
|
|
|
|
|
2012-10-15 17:50:36 -04:00
|
|
|
return wl_display_connect_to_fd(fd);
|
2012-08-14 13:16:10 -04:00
|
|
|
}

/** Close a connection to a Wayland display
 *
 * \param display The display context object
 *
 * Close the connection to \c display and free all resources associated
 * with it.
 *
 * \memberof wl_display
 */
WL_EXPORT void
wl_display_disconnect(struct wl_display *display)
{
	wl_connection_destroy(display->connection);
	wl_map_release(&display->objects);
	wl_event_queue_release(&display->default_queue);
	pthread_mutex_destroy(&display->mutex);
	pthread_cond_destroy(&display->reader_cond);
	close(display->fd);

	free(display);
}

/** Get a display context's file descriptor
 *
 * \param display The display context object
 * \return Display object file descriptor
 *
 * Return the file descriptor associated with a display so it can be
 * integrated into the client's main loop.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_get_fd(struct wl_display *display)
{
	return display->fd;
}

static void
sync_callback(void *data, struct wl_callback *callback, uint32_t serial)
{
	int *done = data;

	*done = 1;
	wl_callback_destroy(callback);
}

static const struct wl_callback_listener sync_listener = {
	sync_callback
};

/** Block until all pending requests are processed by the server
 *
 * \param display The display context object
 * \return The number of dispatched events on success or -1 on failure
 *
 * Blocks until the server has processed all currently issued requests
 * and has sent out pending events on all event queues.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_roundtrip(struct wl_display *display)
{
	struct wl_callback *callback;
	int done, ret = 0;

	done = 0;
	callback = wl_display_sync(display);
	if (callback == NULL)
		return -1;
	wl_callback_add_listener(callback, &sync_listener, &done);
	while (!done && ret >= 0)
		ret = wl_display_dispatch(display);

	if (ret == -1 && !done)
		wl_callback_destroy(callback);

	return ret;
}

static int
create_proxies(struct wl_proxy *sender, struct wl_closure *closure)
{
	struct wl_proxy *proxy;
	const char *signature;
	struct argument_details arg;
	uint32_t id;
	int i;
	int count;

	signature = closure->message->signature;
	count = arg_count_for_signature(signature);
	for (i = 0; i < count; i++) {
		signature = get_next_argument(signature, &arg);
		switch (arg.type) {
		case 'n':
			id = closure->args[i].n;
			if (id == 0) {
				closure->args[i].o = NULL;
				break;
			}
			proxy = wl_proxy_create_for_id(sender, id,
						       closure->message->types[i]);
			if (proxy == NULL)
				return -1;
			closure->args[i].o = (struct wl_object *)proxy;
			break;
		default:
			break;
		}
	}

	return 0;
}
|
|
|
|
|
|
client: Keep track of proxy validity and number of reference holders
When events are queued, the associated proxy objects (target proxy and
potentially closure argument proxies) are verified being valid. However,
as any event may destroy some proxy object, validity needs to be
verified again before dispatching. Before this change this was done by
again looking up the object via the display object map, but that did not
work because a delete_id event could be dispatched out-of-order if it
was queued in another queue, causing the object map to either have a new
proxy object with the same id or none at all, had it been destroyed in
an earlier event in the queue.
Instead, make wl_proxy reference counted and increase the reference
counter of every object associated with an event when it is queued. In
wl_proxy_destroy() set a flag saying the proxy has been destroyed by the
application and only free the proxy if the reference counter reaches
zero after decreasing it.
Before dispatching, verify that a proxy object still is valid by
checking that the flag set in wl_proxy_destroy() has not been set. When
dequeuing the event, all associated proxy objects are dereferenced and
free:ed if the reference counter reaches zero. As proxy reference counter
is initiated to 1, when dispatching an event it can never reach zero
without having the destroyed flag set.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
2012-11-03 22:26:10 +01:00
|
|
|
static void
|
|
|
|
|
increase_closure_args_refcount(struct wl_closure *closure)
|
|
|
|
|
{
|
|
|
|
|
const char *signature;
|
|
|
|
|
struct argument_details arg;
|
|
|
|
|
int i, count;
|
|
|
|
|
struct wl_proxy *proxy;
|
|
|
|
|
|
|
|
|
|
signature = closure->message->signature;
|
2013-02-26 11:30:51 -05:00
|
|
|
count = arg_count_for_signature(signature);
|
|
|
|
|
for (i = 0; i < count; i++) {
|
client: Keep track of proxy validity and number of reference holders
When events are queued, the associated proxy objects (target proxy and
potentially closure argument proxies) are verified being valid. However,
as any event may destroy some proxy object, validity needs to be
verified again before dispatching. Before this change this was done by
again looking up the object via the display object map, but that did not
work because a delete_id event could be dispatched out-of-order if it
was queued in another queue, causing the object map to either have a new
proxy object with the same id or none at all, had it been destroyed in
an earlier event in the queue.
Instead, make wl_proxy reference counted and increase the reference
counter of every object associated with an event when it is queued. In
wl_proxy_destroy() set a flag saying the proxy has been destroyed by the
application and only free the proxy if the reference counter reaches
zero after decreasing it.
Before dispatching, verify that a proxy object still is valid by
checking that the flag set in wl_proxy_destroy() has not been set. When
dequeuing the event, all associated proxy objects are dereferenced and
free:ed if the reference counter reaches zero. As proxy reference counter
is initiated to 1, when dispatching an event it can never reach zero
without having the destroyed flag set.
Signed-off-by: Jonas Ådahl <jadahl@gmail.com>
2012-11-03 22:26:10 +01:00
		signature = get_next_argument(signature, &arg);
		switch (arg.type) {
		case 'n':
		case 'o':
			proxy = (struct wl_proxy *) closure->args[i].o;
			if (proxy)
				proxy->refcount++;
			break;
		default:
			break;
		}
	}
}
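The loop above walks the message signature and takes a reference for every object-carrying argument. As a hedged, self-contained illustration of that signature walk (a toy helper, not part of libwayland; real signatures also carry version digits and '?' nullability markers, which this sketch ignores):

```c
#include <assert.h>

/* Toy helper (not libwayland API): count the object-carrying arguments
 * in a wayland-style signature string. 'n' is a new object id and 'o'
 * an existing object; every other type code is skipped, just as the
 * refcount loop above only acts on 'n' and 'o'. */
static int
count_object_args(const char *signature)
{
	int n = 0;
	const char *s;

	for (s = signature; *s; s++) {
		switch (*s) {
		case 'n':
		case 'o':
			n++;
			break;
		default:
			break;
		}
	}
	return n;
}
```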

static int
client: Add wl_event_queue for multi-thread dispatching
This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffers independently of
a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several threads.
Finally, the current callback model gets into trouble even in a single
threaded scenario: if we have to block in eglSwapBuffers, we may end up
doing unrelated callbacks from within EGL.
The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from objects
associated with the queue will be put in the queue, and conversely,
events from objects associated with the queue will not be queued up anywhere
else. The wl_display struct has a built-in event queue, which is considered
the main and default event queue. New proxies are associated with the
same queue as the object that created them (either the object that a
request with a new-id argument was sent to or the object that sent an
event with a new-id argument). A proxy can be moved to a different event
queue by calling wl_proxy_set_queue().
A subsystem, such as EGL, will then create its own event queue and associate
the objects it expects to receive events from with that queue. If EGL
needs to block and wait for a certain event, it can keep dispatching events
from its queue until that event comes in. This won't call out to unrelated
code with an EGL lock held. Similarly, we don't risk the main thread
handling an event from an EGL object and then calling into EGL from a
different thread without the lock held.
2012-10-05 13:49:48 -04:00
queue_event(struct wl_display *display, int len)
{
	uint32_t p[2], id;
	int opcode, size;
	struct wl_proxy *proxy;
	struct wl_closure *closure;
	const struct wl_message *message;
	struct wl_event_queue *queue;

	wl_connection_copy(display->connection, p, sizeof p);
	id = p[0];
	opcode = p[1] & 0xffff;
	size = p[1] >> 16;
	if (len < size)
		return 0;

	proxy = wl_map_lookup(&display->objects, id);
	if (proxy == WL_ZOMBIE_OBJECT) {
		wl_connection_consume(display->connection, size);
		return size;
	} else if (proxy == NULL) {
		wl_connection_consume(display->connection, size);
		return size;
	}

	message = &proxy->object.interface->events[opcode];
	closure = wl_connection_demarshal(display->connection, size,
					  &display->objects, message);
	if (!closure)
		return -1;

	if (create_proxies(proxy, closure) < 0) {
		wl_closure_destroy(closure);
		return -1;
	}

	if (wl_closure_lookup_objects(closure, &display->objects) != 0) {
		wl_closure_destroy(closure);
		return -1;
	}

	increase_closure_args_refcount(closure);
	proxy->refcount++;
	closure->proxy = proxy;

	if (proxy == &display->proxy)
		queue = &display->display_queue;
	else
		queue = proxy->queue;

	if (wl_list_empty(&queue->event_list))
		pthread_cond_signal(&queue->cond);
	wl_list_insert(queue->event_list.prev, &closure->link);

	return size;
}
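queue_event() routes each event to the queue of its target proxy, with events for the display object itself pinned to a private display queue. A minimal, self-contained model of that routing rule, using hypothetical toy_* names rather than the libwayland API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: events are delivered to the queue their target proxy is
 * assigned to, so a thread draining one queue never sees events that
 * belong to another queue. */
struct toy_queue {
	int events[8];
	size_t count;
};

struct toy_proxy {
	struct toy_queue *queue;
};

static void
toy_queue_event(struct toy_proxy *proxy, int event)
{
	/* route by the proxy's queue assignment, not by arrival order */
	proxy->queue->events[proxy->queue->count++] = event;
}

static int
toy_dispatch_one(struct toy_queue *q)
{
	size_t i;
	int ev;

	if (q->count == 0)
		return -1;	/* nothing queued here */
	ev = q->events[0];
	for (i = 1; i < q->count; i++)
		q->events[i - 1] = q->events[i];
	q->count--;
	return ev;
}
```

In the real library this separation is what lets EGL block and drain only its own queue while waiting for a frame event, without running unrelated application callbacks.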

static void
decrease_closure_args_refcount(struct wl_closure *closure)
{
	const char *signature;
	struct argument_details arg;
	int i, count;
	struct wl_proxy *proxy;

	signature = closure->message->signature;
	count = arg_count_for_signature(signature);
	for (i = 0; i < count; i++) {
		signature = get_next_argument(signature, &arg);
		switch (arg.type) {
		case 'n':
		case 'o':
			proxy = (struct wl_proxy *) closure->args[i].o;
			if (proxy) {
				if (proxy->flags & WL_PROXY_FLAG_DESTROYED)
					closure->args[i].o = NULL;

				proxy->refcount--;
				if (!proxy->refcount)
					free(proxy);
			}
			break;
		default:
			break;
		}
	}
}
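Taken together, increase_closure_args_refcount() and decrease_closure_args_refcount() implement the lifecycle the commit message describes: queuing an event takes a reference, destruction by the application only sets a flag, and whoever drops the last reference frees the proxy. A self-contained sketch of that lifecycle, with hypothetical toy_* names (not the libwayland API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define TOY_FLAG_DESTROYED 1

struct toy_proxy {
	int refcount;	/* starts at 1: the application's own reference */
	int flags;
};

static struct toy_proxy *
toy_proxy_create(void)
{
	struct toy_proxy *p = calloc(1, sizeof *p);

	p->refcount = 1;
	return p;
}

/* Queuing an event that references the proxy takes a reference. */
static void
toy_event_ref(struct toy_proxy *p)
{
	p->refcount++;
}

/* The application destroys the proxy: only flag it, and free it here
 * only if no queued event still holds a reference. */
static void
toy_proxy_destroy(struct toy_proxy *p)
{
	p->flags |= TOY_FLAG_DESTROYED;
	if (--p->refcount == 0)
		free(p);
}

/* Dequeuing drops the event's reference and reports whether the proxy
 * is still valid, i.e. was not destroyed by the application meanwhile. */
static bool
toy_event_unref(struct toy_proxy *p)
{
	bool valid = !(p->flags & TOY_FLAG_DESTROYED);

	if (--p->refcount == 0)
		free(p);
	return valid;
}
```

Because the counter starts at 1, it can only reach zero after the destroyed flag has been set, which is exactly the invariant stated in the commit message.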

static void
dispatch_event(struct wl_display *display, struct wl_event_queue *queue)
{
	struct wl_closure *closure;
	struct wl_proxy *proxy;
	int opcode;
	bool proxy_destroyed;

	closure = container_of(queue->event_list.next,
			       struct wl_closure, link);
	wl_list_remove(&closure->link);
	opcode = closure->opcode;

	/* Verify that the receiving object is still valid by checking if it
	 * has been destroyed by the application. */
	decrease_closure_args_refcount(closure);
	proxy = closure->proxy;
	proxy_destroyed = !!(proxy->flags & WL_PROXY_FLAG_DESTROYED);

	proxy->refcount--;
	if (proxy_destroyed) {
		if (!proxy->refcount)
			free(proxy);

		wl_closure_destroy(closure);
		return;
	}

	pthread_mutex_unlock(&display->mutex);

	if (proxy->dispatcher) {
		if (debug_client)
			wl_closure_print(closure, &proxy->object, false);

		wl_closure_dispatch(closure, proxy->dispatcher,
				    &proxy->object, opcode);
	} else if (proxy->object.implementation) {
		if (debug_client)
			wl_closure_print(closure, &proxy->object, false);

		wl_closure_invoke(closure, WL_CLOSURE_INVOKE_CLIENT,
				  &proxy->object, opcode, proxy->user_data);
	}

	wl_closure_destroy(closure);

	pthread_mutex_lock(&display->mutex);
}
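Note that dispatch_event() drops display->mutex for the duration of the application callback and reacquires it afterwards, so a handler can call back into the library (or block) without deadlocking against the display lock. A minimal single-threaded sketch of that discipline, using a hypothetical lock counter in place of the real pthread mutex:

```c
#include <assert.h>

/* Toy lock: depth > 0 means held. Stands in for display->mutex. */
static int toy_lock_depth;

static void toy_lock(void)   { toy_lock_depth++; }
static void toy_unlock(void) { toy_lock_depth--; }

/* Records the lock depth observed while the handler ran. */
static int handler_saw_depth = -1;

static void
toy_handler(void)
{
	handler_saw_depth = toy_lock_depth;
}

/* Mirrors the shape of dispatch_event(): callers hold the lock, but it
 * is released around the user callback and reacquired before return. */
static void
toy_dispatch(void (*handler)(void))
{
	toy_unlock();
	handler();
	toy_lock();
}
```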

static int
client: Add wl_display_prepare_read() API to relax thread model assumptions
The current thread model assumes that the application or toolkit will have
one thread that either polls the display fd and dispatches events or just
dispatches in a loop. Only this main thread will read from the fd while
all other threads will block on a pthread condition and expect the main
thread to deliver events to them.
This turns out to be too restrictive. We can't assume that there
will always be a thread like that. Qt QML threaded rendering will
block the main thread on a condition that's signaled by a rendering
thread after it finishes rendering. This leads to a deadlock when the
rendering thread blocks in eglSwapBuffers(), and the main thread is
waiting on the condition. Another problematic use case is with games
that have a rendering thread for a splash screen while the main thread
is busy loading game data or compiling shaders. The main thread isn't
responsive and ends up blocking eglSwapBuffers() in the rendering thread.
We also can't assume that there will be only one thread polling on the
file descriptor. A valid use case is a thread receiving data from a
custom wayland interface as well as a device fd or network socket.
The thread may want to wait on either events from the wayland
interface or data from the fd, in which case it needs to poll on both
the wayland display fd and the device/network fd.
The solution seems pretty straightforward: just let all threads read
from the fd. However, the main-thread restriction was introduced to
avoid a race. Simplified, main loops will do something like this:
wl_display_dispatch_pending(display);
/* Race here if another thread reads from the fd and places events
 * in the main event queue. We go to sleep in poll while sitting on
 * events that may stall the application if not dispatched. */
poll(fds, nfds, -1);
/* Race here if another thread reads and doesn't queue any
 * events for the main queue. wl_display_dispatch() below will block
 * trying to read from the fd, while other fds in the mainloop
 * are ignored. */
wl_display_dispatch(display);
The restriction that only the main thread can read from the fd avoids
these races, but has the problems described above.
This patch introduces new API to solve both problems. We add
int wl_display_prepare_read(struct wl_display *display);
and
int wl_display_read_events(struct wl_display *display);
wl_display_prepare_read() registers the calling thread as a potential
reader of events. Once data is available on the fd, all reader
threads must call wl_display_read_events(), at which point one of the
threads will read from the fd and distribute the events to event
queues. When that is done, all threads return from
wl_display_read_events().
From the point of view of a single thread, this ensures that between
calling wl_display_prepare_read() and wl_display_read_events(), no
other thread will read from the fd and queue events in its event
queue. This avoids the race conditions described above, and we avoid
relying on any one thread to be available to read events.
2013-03-17 14:21:48 -04:00
read_events(struct wl_display *display)
{
	int total, rem, size;
	uint32_t serial;

	display->reader_count--;
	if (display->reader_count == 0) {
		total = wl_connection_read(display->connection);
		if (total == -1) {
			if (errno == EAGAIN)
				return 0;

			display_fatal_error(display, errno);
			return -1;
		} else if (total == 0) {
			/* The compositor has closed the socket. This
			 * should be considered an error so we'll fake
			 * an errno */
			errno = EPIPE;
			display_fatal_error(display, errno);
			return -1;
client: Add wl_event_queue for multi-thread dispatching
This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffers() independently of
a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several threads.
Finally, the current callback model gets into trouble even in a single
threaded scenario: if we have to block in eglSwapBuffers, we may end up
doing unrelated callbacks from within EGL.
The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from objects
associated with the queue will be put in the queue, and conversely,
events from objects associated with the queue will not be queued up anywhere
else. The wl_display struct has a built-in event queue, which is considered
the main and default event queue. New proxies are associated with the
same queue as the object that created them (either the object that a
request with a new-id argument was sent to or the object that sent an
event with a new-id argument). A proxy can be moved to a different event
queue by calling wl_proxy_set_queue().
A subsystem, such as EGL, will then create its own event queue and associate
the objects it expects to receive events from with that queue. If EGL
needs to block and wait for a certain event, it can keep dispatching events
from its queue until that event comes in. This won't call out to unrelated
code with an EGL lock held. Similarly, we don't risk the main thread
handling an event from an EGL object and then calling into EGL from a
different thread without the lock held.
2012-10-05 13:49:48 -04:00
		}
		for (rem = total; rem >= 8; rem -= size) {
			size = queue_event(display, rem);
			if (size == -1) {
				display_fatal_error(display, errno);
				return -1;
			} else if (size == 0) {
				break;
			}
		}
		display->read_serial++;
		pthread_cond_broadcast(&display->reader_cond);
	} else {
		serial = display->read_serial;
		while (display->read_serial == serial)
			pthread_cond_wait(&display->reader_cond,
					  &display->mutex);
	}
	return 0;
}
/** Read events from display file descriptor
 *
 * \param display The display context object
 * \return 0 on success or -1 on error. In case of error errno will
 * be set accordingly
 *
 * This will read events from the file descriptor for the display.
 * This function does not dispatch events, it only reads and queues
 * events into their corresponding event queues. If no data is
 * available on the file descriptor, wl_display_read_events() returns
 * immediately. To dispatch events that may have been queued, call
 * wl_display_dispatch_pending() or
 * wl_display_dispatch_queue_pending().
 *
 * Before calling this function, wl_display_prepare_read() must be
 * called first.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_read_events(struct wl_display *display)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	ret = read_events(display);

	pthread_mutex_unlock(&display->mutex);

	return ret;
}
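Taken together with wl_display_prepare_read(), the caller-side pattern this API enables looks roughly like the following. This is a sketch of intended usage rather than code from this file; it assumes the companion entry points wl_display_get_fd(), wl_display_flush() and wl_display_cancel_read() from the same API family, and abbreviates error handling:

```c
#include <poll.h>
#include <wayland-client.h>

/* Race-free "wait for wayland events plus other fds" loop body,
 * safe to run in any thread. */
static int
poll_and_dispatch(struct wl_display *display)
{
	struct pollfd pfd;

	/* prepare_read() fails while events are already pending;
	 * dispatch those first so we never sleep on queued events. */
	while (wl_display_prepare_read(display) != 0)
		wl_display_dispatch_pending(display);
	wl_display_flush(display);

	pfd.fd = wl_display_get_fd(display);
	pfd.events = POLLIN;
	if (poll(&pfd, 1, -1) == -1) {
		/* Must undo the registration if we won't read. */
		wl_display_cancel_read(display);
		return -1;
	}

	if (wl_display_read_events(display) == -1)
		return -1;

	return wl_display_dispatch_pending(display);
}
```

Additional fds can simply be added to the pollfd array; the wayland fd is treated like any other event source.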
static int
dispatch_queue(struct wl_display *display, struct wl_event_queue *queue)
{
	int count;

	if (display->last_error)
		goto err;

	count = 0;
	while (!wl_list_empty(&display->display_queue.event_list)) {
		dispatch_event(display, &display->display_queue);
		if (display->last_error)
			goto err;
		count++;
	}

	while (!wl_list_empty(&queue->event_list)) {
		dispatch_event(display, queue);
		if (display->last_error)
		goto err;
		count++;
	}

	return count;
err:
	errno = display->last_error;

	return -1;
client: Add wl_event_queue for multi-thread dispatching
This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffers independently of
a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several threads.
Finally, the current callback model gets into trouble even in a single
threaded scenario: if we have to block in eglSwapBuffers, we may end up
doing unrelated callbacks from within EGL.
The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from objects
associated with the queue will be put in the queue, and conversely,
events from objects associated with the queue will not be queued up anywhere
else. The wl_display struct has a built-in event queue, which is considered
the main and default event queue. New proxies are associated with the
same queue as the object that created them (either the object that a
request with a new-id argument was sent to or the object that sent an
event with a new-id argument). A proxy can be moved to a different event
queue by calling wl_proxy_set_queue().
A subsystem, such as EGL, will then create its own event queue and associate
the objects it expects to receive events from with that queue. If EGL
needs to block and wait for a certain event, it can keep dispatching events
from its queue until that event comes in. This won't call out to unrelated
code with an EGL lock held. Similarly, we don't risk the main thread
handling an event from an EGL object and then calling into EGL from a
different thread without the lock held.
2012-10-05 13:49:48 -04:00
}
WL_EXPORT int
wl_display_prepare_read_queue(struct wl_display *display,
			      struct wl_event_queue *queue)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	if (!wl_list_empty(&queue->event_list)) {
		errno = EAGAIN;
		ret = -1;
	} else {
		display->reader_count++;
		ret = 0;
	}

	pthread_mutex_unlock(&display->mutex);

	return ret;
}

/** Prepare to read events after polling file descriptor
 *
 * \param display The display context object
 * \return 0 on success or -1 if event queue was not empty
 *
 * This function must be called before reading from the file
 * descriptor using wl_display_read_events(). Calling
 * wl_display_prepare_read() announces the calling thread's intention
 * to read and ensures that until the thread is ready to read and
 * calls wl_display_read_events(), no other thread will read from the
 * file descriptor. This only succeeds if the event queue is empty,
 * however; if there are undispatched events in the queue, -1 is
 * returned and errno is set to EAGAIN.
 *
 * If a thread successfully calls wl_display_prepare_read(), it must
 * either call wl_display_read_events() when it's ready or cancel the
 * read intention by calling wl_display_cancel_read().
 *
 * Use this function before polling on the display fd or to integrate
 * the fd into a toolkit event loop in a race-free way. Typically, a
 * toolkit will call wl_display_dispatch_pending() before sleeping, to
 * make sure it doesn't block with unhandled events. Upon waking up,
 * it will assume the file descriptor is readable and read events from
 * the fd by calling wl_display_dispatch(). Simplified, we have:
 *
 *   wl_display_dispatch_pending(display);
 *   wl_display_flush(display);
 *   poll(fds, nfds, -1);
 *   wl_display_dispatch(display);
 *
 * There are two races here: first, before blocking in poll(), the fd
 * could become readable and another thread reads the events. Some of
 * these events may be for the main queue and the other thread will
 * queue them there and then the main thread will go to sleep in
 * poll(). This will stall the application, which could be waiting
 * for an event to kick off the next animation frame, for example.
 *
 * The other race is immediately after poll(), where another thread
 * could preempt and read events before the main thread calls
 * wl_display_dispatch(). This call now blocks and starves the other
 * fds in the event loop.
 *
 * A correct sequence would be:
 *
 *   while (wl_display_prepare_read(display) != 0)
 *           wl_display_dispatch_pending(display);
 *   wl_display_flush(display);
 *   poll(fds, nfds, -1);
 *   wl_display_read_events(display);
 *   wl_display_dispatch_pending(display);
 *
 * Here we call wl_display_prepare_read(), which ensures that between
 * returning from that call and eventually calling
 * wl_display_read_events(), no other thread will read from the fd and
 * queue events in our queue. If the call to
 * wl_display_prepare_read() fails, we dispatch the pending events and
 * try again until we're successful.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_prepare_read(struct wl_display *display)
{
	return wl_display_prepare_read_queue(display, &display->default_queue);
}

/** Release exclusive access to display file descriptor
 *
 * \param display The display context object
 *
 * This releases the exclusive access. Useful for canceling the read
 * intention when a timed-out poll returns with the fd not readable
 * and we're not going to read from the fd anytime soon.
 *
 * \memberof wl_display
 */
WL_EXPORT void
wl_display_cancel_read(struct wl_display *display)
{
	pthread_mutex_lock(&display->mutex);

	display->reader_count--;
	if (display->reader_count == 0) {
		display->read_serial++;
		pthread_cond_broadcast(&display->reader_cond);
	}

	pthread_mutex_unlock(&display->mutex);
}

/** Dispatch events in an event queue
 *
 * \param display The display context object
 * \param queue The event queue to dispatch
 * \return The number of dispatched events on success or -1 on failure
 *
 * Dispatch all incoming events for objects assigned to the given
 * event queue. On failure -1 is returned and errno set appropriately.
 *
 * This function blocks if there are no events to dispatch. If calling from
 * the main thread, it will block reading data from the display fd. For other
 * threads this will block until the main thread queues events on the queue
 * passed as argument.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_dispatch_queue(struct wl_display *display,
			  struct wl_event_queue *queue)
{
	struct pollfd pfd[2];
	int ret;

	pthread_mutex_lock(&display->mutex);

	ret = dispatch_queue(display, queue);
	if (ret == -1)
		goto err_unlock;
	if (ret > 0) {
		pthread_mutex_unlock(&display->mutex);
		return ret;
	}
	/* We ignore EPIPE here, so that we try to read events before
	 * returning an error. When the compositor sends an error it
	 * will close the socket, and if we bail out here we don't get
	 * a chance to process the error. */
	ret = wl_connection_flush(display->connection);
	if (ret < 0 && errno != EAGAIN && errno != EPIPE) {
		display_fatal_error(display, errno);
		goto err_unlock;
	}

	display->reader_count++;

	pthread_mutex_unlock(&display->mutex);

	pfd[0].fd = display->fd;
	pfd[0].events = POLLIN;
	do {
		ret = poll(pfd, 1, -1);
	} while (ret == -1 && errno == EINTR);

	if (ret == -1) {
|
2013-09-25 10:39:12 +01:00
|
|
|
wl_display_cancel_read(display);
		return -1;
	}
	pthread_mutex_lock(&display->mutex);

	if (read_events(display) == -1)
		goto err_unlock;

	ret = dispatch_queue(display, queue);
	if (ret == -1)
		goto err_unlock;

	pthread_mutex_unlock(&display->mutex);

	return ret;

err_unlock:
	pthread_mutex_unlock(&display->mutex);

	return -1;
}
/** Dispatch pending events in an event queue
 *
 * \param display The display context object
 * \param queue The event queue to dispatch
 * \return The number of dispatched events on success or -1 on failure
 *
 * Dispatch all incoming events for objects assigned to the given
 * event queue. On failure -1 is returned and errno set appropriately.
 * If there are no events queued, this function returns immediately.
 *
 * \memberof wl_display
 * \since 1.0.2
 */
WL_EXPORT int
wl_display_dispatch_queue_pending(struct wl_display *display,
				  struct wl_event_queue *queue)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	ret = dispatch_queue(display, queue);

	pthread_mutex_unlock(&display->mutex);

	return ret;
}
/** Process incoming events
 *
 * \param display The display context object
 * \return The number of dispatched events on success or -1 on failure
 *
 * Dispatch the display's main event queue.
 *
 * If the main event queue is empty, this function blocks until there are
 * events to be read from the display fd. Events are read and queued on
 * the appropriate event queues. Finally, events on the main event queue
 * are dispatched.
 *
 * \note It is not possible to check if there are events on the main queue
 * or not. For dispatching main queue events without blocking, see \ref
 * wl_display_dispatch_pending().
 *
 * \note Calling this will release the display file descriptor if this
 * thread acquired it using wl_display_acquire_fd().
 *
 * \sa wl_display_dispatch_pending(), wl_display_dispatch_queue()
 *
 * \memberof wl_display
 */
client: Add wl_event_queue for multi-thread dispatching
This introduces wl_event_queue, which is what will make multi-threaded
wayland clients possible and useful. The driving use case is that of a
GL rendering thread that renders and calls eglSwapBuffers() independently of
a "main thread" that owns the wl_display and handles input events and
everything else. In general, the EGL and GL APIs have a threading model
that requires the wayland client library to be usable from several threads.
Finally, the current callback model gets into trouble even in a single
threaded scenario: if we have to block in eglSwapBuffers, we may end up
doing unrelated callbacks from within EGL.
The wl_event_queue mechanism lets the application (or middleware such as
EGL or toolkits) assign a proxy to an event queue. Only events from objects
associated with the queue will be put in the queue, and conversely,
events from objects associated with the queue will not be queued up anywhere
else. The wl_display struct has a built-in event queue, which is considered
the main and default event queue. New proxies are associated with the
same queue as the object that created them (either the object that a
request with a new-id argument was sent to or the object that sent an
event with a new-id argument). A proxy can be moved to a different event
queue by calling wl_proxy_set_queue().
A subsystem, such as EGL, will then create its own event queue and associate
the objects it expects to receive events from with that queue. If EGL
needs to block and wait for a certain event, it can keep dispatching events
from its queue until that event comes in. This won't call out to unrelated
code with an EGL lock held. Similarly, we don't risk the main thread
handling an event from an EGL object and then calling into EGL from a
different thread without the lock held.
2012-10-05 13:49:48 -04:00
WL_EXPORT int
wl_display_dispatch(struct wl_display *display)
{
	return wl_display_dispatch_queue(display, &display->default_queue);
}
/** Dispatch main queue events without reading from the display fd
 *
 * \param display The display context object
 * \return The number of dispatched events or -1 on failure
 *
 * This function dispatches events on the main event queue. It does not
 * attempt to read the display fd and simply returns zero if the main
 * queue is empty, i.e., it doesn't block.
 *
 * This is necessary when a client's main loop wakes up on some fd other
 * than the display fd (network socket, timer fd, etc) and calls \ref
 * wl_display_dispatch_queue() from that callback. This may queue up
 * events in the main queue while reading all data from the display fd.
 * When the main thread returns to the main loop to block, the display fd
 * no longer has data, causing a call to \em poll(2) (or similar
 * functions) to block indefinitely, even though there are events ready
 * to dispatch.
 *
 * To properly integrate the wayland display fd into a main loop, the
 * client should always call \ref wl_display_dispatch_pending() and then
 * \ref wl_display_flush() prior to going back to sleep. At that point,
 * the fd typically doesn't have data so attempting I/O could block, but
 * events queued up on the main queue should be dispatched.
 *
 * A real-world example is a main loop that wakes up on a timerfd (or a
 * sound card fd becoming writable, for example in a video player), which
 * then triggers GL rendering and eventually eglSwapBuffers().
 * eglSwapBuffers() may call wl_display_dispatch_queue() if it didn't
 * receive the frame event for the previous frame, and as such queue
 * events in the main queue.
 *
 * \note Calling this makes the current thread the main one.
 *
 * \sa wl_display_dispatch(), wl_display_dispatch_queue(),
 * wl_display_flush()
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_dispatch_pending(struct wl_display *display)
{
	return wl_display_dispatch_queue_pending(display,
						 &display->default_queue);
}
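A sketch (not part of this file) of the main-loop integration the comment above prescribes: dispatch pending events and flush before sleeping, then poll both the display fd and some other fd. It assumes a connected display and a hypothetical `other_fd` (timer, socket, etc.):

```c
/* Sketch: one iteration of a poll(2)-based main loop that watches
 * the display fd alongside another fd of interest. */
#include <poll.h>
#include <wayland-client.h>

static void
main_loop_iteration(struct wl_display *display, int other_fd)
{
	struct pollfd pfd[2];

	/* Dispatch what is already queued and flush buffered requests
	 * before going back to sleep, as described above. */
	wl_display_dispatch_pending(display);
	wl_display_flush(display);

	pfd[0].fd = wl_display_get_fd(display);
	pfd[0].events = POLLIN;
	pfd[1].fd = other_fd;
	pfd[1].events = POLLIN;

	if (poll(pfd, 2, -1) == -1)
		return;

	if (pfd[0].revents & POLLIN)
		wl_display_dispatch(display); /* fd readable: won't block */
	if (pfd[1].revents & POLLIN)
		; /* handle other_fd here */
}
```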
/** Retrieve the last error that occurred on a display
 *
 * \param display The display context object
 * \return The last error that occurred on \c display or 0 if no error occurred
 *
 * Return the last error that occurred on the display. This may be an error sent
 * by the server or caused by the local client.
 *
 * \note Errors are \b fatal. If this function returns non-zero the display
 * can no longer be used.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_get_error(struct wl_display *display)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	ret = display->last_error;

	pthread_mutex_unlock(&display->mutex);

	return ret;
}
/** Send all buffered requests on the display to the server
 *
 * \param display The display context object
 * \return The number of bytes sent on success or -1 on failure
 *
 * Send all buffered data on the client side to the server. Clients
 * should call this function before blocking. On success, the number
 * of bytes sent to the server is returned. On failure, this
 * function returns -1 and errno is set appropriately.
 *
 * wl_display_flush() never blocks. It will write as much data as
 * possible, but if all data could not be written, errno will be set
 * to EAGAIN and -1 returned. In that case, use poll on the display
 * file descriptor to wait for it to become writable again.
 *
 * \memberof wl_display
 */
WL_EXPORT int
wl_display_flush(struct wl_display *display)
{
	int ret;

	pthread_mutex_lock(&display->mutex);

	if (display->last_error) {
		errno = display->last_error;
		ret = -1;
	} else {
		ret = wl_connection_flush(display->connection);
		if (ret < 0 && errno != EAGAIN)
			display_fatal_error(display, errno);
	}

	pthread_mutex_unlock(&display->mutex);

	return ret;
}
/** Set the user data associated with a proxy
 *
 * \param proxy The proxy object
 * \param user_data The data to be associated with proxy
 *
 * Set the user data associated with \c proxy. When events for this
 * proxy are received, \c user_data will be supplied to its listener.
 *
 * \memberof wl_proxy
 */
WL_EXPORT void
wl_proxy_set_user_data(struct wl_proxy *proxy, void *user_data)
{
	proxy->user_data = user_data;
}
/** Get the user data associated with a proxy
 *
 * \param proxy The proxy object
 * \return The user data associated with proxy
 *
 * \memberof wl_proxy
 */
WL_EXPORT void *
wl_proxy_get_user_data(struct wl_proxy *proxy)
{
	return proxy->user_data;
}
/** Get the id of a proxy object
 *
 * \param proxy The proxy object
 * \return The id of the object associated with the proxy
 *
 * \memberof wl_proxy
 */
WL_EXPORT uint32_t
wl_proxy_get_id(struct wl_proxy *proxy)
{
	return proxy->object.id;
}
/** Get the interface name (class) of a proxy object
 *
 * \param proxy The proxy object
 * \return The interface name of the object associated with the proxy
 *
 * \memberof wl_proxy
 */
WL_EXPORT const char *
wl_proxy_get_class(struct wl_proxy *proxy)
{
	return proxy->object.interface->name;
}
/** Assign a proxy to an event queue
 *
 * \param proxy The proxy object
 * \param queue The event queue that will handle this proxy
 *
 * Assign proxy to event queue. Events coming from \c proxy will be
 * queued in \c queue instead of the display's main queue.
 *
 * \sa wl_display_dispatch_queue()
 *
 * \memberof wl_proxy
 */
WL_EXPORT void
wl_proxy_set_queue(struct wl_proxy *proxy, struct wl_event_queue *queue)
{
	if (queue)
		proxy->queue = queue;
	else
		proxy->queue = &proxy->display->default_queue;
|
}
WL_EXPORT void
wl_log_set_handler_client(wl_log_func_t handler)
{
	wl_log_handler = handler;
}