wayland/src/wayland-client.c

/*
* Copyright © 2008-2012 Kristian Høgsberg
* Copyright © 2010-2012 Intel Corporation
*
* Permission to use, copy, modify, distribute, and sell this software and its
* documentation for any purpose is hereby granted without fee, provided that
* the above copyright notice appear in all copies and that both that copyright
* notice and this permission notice appear in supporting documentation, and
* that the name of the copyright holders not be used in advertising or
* publicity pertaining to distribution of the software without specific,
* written prior permission. The copyright holders make no representations
* about the suitability of this software for any purpose. It is provided "as
* is" without express or implied warranty.
*
* THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
* INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO
* EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR
* CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
* DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
* TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
* OF THIS SOFTWARE.
*/
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <stdbool.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <ctype.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/poll.h>
#include <pthread.h>
#include "wayland-util.h"
#include "wayland-os.h"
#include "wayland-client.h"
#include "wayland-private.h"
/** \cond */
enum wl_proxy_flag {
	WL_PROXY_FLAG_ID_DELETED = (1 << 0),
	WL_PROXY_FLAG_DESTROYED = (1 << 1)
};
struct wl_proxy {
	struct wl_object object;
	struct wl_display *display;
	struct wl_event_queue *queue;
	uint32_t flags;
	int refcount;
	void *user_data;
};
struct wl_global {
	uint32_t id;
	char *interface;
	uint32_t version;
	struct wl_list link;
};
struct wl_event_queue {
	struct wl_list link;
	struct wl_list event_list;
	struct wl_display *display;
	pthread_cond_t cond;
};
struct wl_display {
	struct wl_proxy proxy;
	struct wl_connection *connection;
	int last_error;
	int fd;
	pthread_t display_thread;
	struct wl_map objects;
	struct wl_event_queue queue;
	struct wl_list event_queue_list;
	pthread_mutex_t mutex;
	int reader_count;
	uint32_t read_serial;
	pthread_cond_t reader_cond;
};
/** \endcond */
static int wl_debug = 0;
static void
display_fatal_error(struct wl_display *display, int error)
{
	struct wl_event_queue *iter;

	if (display->last_error)
		return;

	if (!error)
		error = 1;

	display->last_error = error;

	close(display->fd);
	display->fd = -1;

	wl_list_for_each(iter, &display->event_queue_list, link)
		pthread_cond_broadcast(&iter->cond);
}
static void
wl_display_fatal_error(struct wl_display *display, int error)
{
	pthread_mutex_lock(&display->mutex);
	display_fatal_error(display, error);
	pthread_mutex_unlock(&display->mutex);
}
static void
wl_event_queue_init(struct wl_event_queue *queue, struct wl_display *display)
{
	wl_list_init(&queue->event_list);
	pthread_cond_init(&queue->cond, NULL);
	queue->display = display;
}
static void
wl_event_queue_release(struct wl_event_queue *queue)
{
	struct wl_closure *closure;

	while (!wl_list_empty(&queue->event_list)) {
		closure = container_of(queue->event_list.next,
				       struct wl_closure, link);
		wl_list_remove(&closure->link);
		wl_closure_destroy(closure);
	}

	pthread_cond_destroy(&queue->cond);
}
/** Destroy an event queue
*
* \param queue The event queue to be destroyed
*
* Destroy the given event queue. Any pending event on that queue is
* discarded.
*
* The \ref wl_display object used to create the queue should not be
* destroyed until all event queues created with it are destroyed with
* this function.
*
* \memberof wl_event_queue
*/
WL_EXPORT void
wl_event_queue_destroy(struct wl_event_queue *queue)
{
	struct wl_display *display = queue->display;

	pthread_mutex_lock(&display->mutex);
	wl_list_remove(&queue->link);
	wl_event_queue_release(queue);
	free(queue);
	pthread_mutex_unlock(&display->mutex);
}
/** Create a new event queue for this display
*
* \param display The display context object
* \return A new event queue associated with this display or NULL on
* failure.
*
* \memberof wl_display
*/
WL_EXPORT struct wl_event_queue *
wl_display_create_queue(struct wl_display *display)
{
	struct wl_event_queue *queue;

	queue = malloc(sizeof *queue);
	if (queue == NULL)
		return NULL;

	wl_event_queue_init(queue, display);

	pthread_mutex_lock(&display->mutex);
	wl_list_insert(&display->event_queue_list, &queue->link);
	pthread_mutex_unlock(&display->mutex);

	return queue;
}
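
/* Usage sketch: a typical private-queue lifecycle, assuming a wl_callback
 * proxy named 'callback' and a 'done' flag set from its handler (both are
 * illustrative, not defined in this file).  The queue must be destroyed
 * before the wl_display that created it.
 *
 * \code
 * struct wl_event_queue *queue = wl_display_create_queue(display);
 *
 * wl_proxy_set_queue((struct wl_proxy *) callback, queue);
 * while (!done && wl_display_dispatch_queue(display, queue) != -1)
 *         ;
 * wl_event_queue_destroy(queue);
 * \endcode
 */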
/** Create a proxy object with a given interface
*
* \param factory Factory proxy object
* \param interface Interface the proxy object should use
* \return A newly allocated proxy object or NULL on failure
*
* This function creates a new proxy object with the supplied interface. The
* proxy object will have an id assigned from the client id space. The id
* should be created on the compositor side by sending an appropriate request
* with \ref wl_proxy_marshal().
*
* The proxy will inherit the display and event queue of the factory object.
*
* \note This should not normally be used by non-generated code.
*
* \sa wl_display, wl_event_queue, wl_proxy_marshal()
*
* \memberof wl_proxy
*/
WL_EXPORT struct wl_proxy *
wl_proxy_create(struct wl_proxy *factory, const struct wl_interface *interface)
{
	struct wl_proxy *proxy;
	struct wl_display *display = factory->display;

	proxy = malloc(sizeof *proxy);
	if (proxy == NULL)
		return NULL;

	proxy->object.interface = interface;
	proxy->object.implementation = NULL;
	proxy->display = display;
	proxy->queue = factory->queue;
	proxy->flags = 0;
	proxy->refcount = 1;

	pthread_mutex_lock(&display->mutex);
	proxy->object.id = wl_map_insert_new(&display->objects, 0, proxy);
	pthread_mutex_unlock(&display->mutex);

	return proxy;
}
/* The caller should hold the display lock */
static struct wl_proxy *
wl_proxy_create_for_id(struct wl_proxy *factory,
		       uint32_t id, const struct wl_interface *interface)
{
	struct wl_proxy *proxy;
	struct wl_display *display = factory->display;

	proxy = malloc(sizeof *proxy);
	if (proxy == NULL)
		return NULL;

	proxy->object.interface = interface;
	proxy->object.implementation = NULL;
	proxy->object.id = id;
	proxy->display = display;
	proxy->queue = factory->queue;
	proxy->flags = 0;
	proxy->refcount = 1;

	wl_map_insert_at(&display->objects, 0, id, proxy);

	return proxy;
}
/** Destroy a proxy object
*
* \param proxy The proxy to be destroyed
*
* \memberof wl_proxy
*/
WL_EXPORT void
wl_proxy_destroy(struct wl_proxy *proxy)
{
	struct wl_display *display = proxy->display;

	pthread_mutex_lock(&display->mutex);

	if (proxy->flags & WL_PROXY_FLAG_ID_DELETED)
		wl_map_remove(&proxy->display->objects, proxy->object.id);
	else if (proxy->object.id < WL_SERVER_ID_START)
		wl_map_insert_at(&proxy->display->objects, 0,
				 proxy->object.id, WL_ZOMBIE_OBJECT);
	else
		wl_map_insert_at(&proxy->display->objects, 0,
				 proxy->object.id, NULL);

	proxy->flags |= WL_PROXY_FLAG_DESTROYED;

	proxy->refcount--;
	if (!proxy->refcount)
		free(proxy);

	pthread_mutex_unlock(&display->mutex);
}
/** Set a proxy's listener
*
* \param proxy The proxy object
* \param implementation The listener to be added to proxy
* \param data User data to be associated with the proxy
* \return 0 on success or -1 on failure
*
* Set proxy's listener to \c implementation and its user data to
* \c data. If a listener has already been set, this function
* fails and nothing is changed.
*
* \c implementation is a vector of function pointers. For an opcode
* \c n, \c implementation[n] should point to the handler of \c n for
* the given object.
*
* \memberof wl_proxy
*/
WL_EXPORT int
wl_proxy_add_listener(struct wl_proxy *proxy,
		      void (**implementation)(void), void *data)
{
	if (proxy->object.implementation) {
		fprintf(stderr, "proxy already has listener\n");
		return -1;
	}

	proxy->object.implementation = implementation;
	proxy->user_data = data;

	return 0;
}
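
/* Usage sketch: generated bindings normally wrap this function; a
 * hand-rolled call passes a listener struct cast to a function-pointer
 * vector indexed by event opcode.  The handler names below are
 * illustrative assumptions.
 *
 * \code
 * static void registry_global(void *data, struct wl_registry *registry,
 *                             uint32_t name, const char *interface,
 *                             uint32_t version) { ... }
 * static void registry_global_remove(void *data,
 *                                    struct wl_registry *registry,
 *                                    uint32_t name) { ... }
 *
 * static const struct wl_registry_listener registry_listener = {
 *         registry_global,
 *         registry_global_remove
 * };
 *
 * wl_proxy_add_listener((struct wl_proxy *) registry,
 *                       (void (**)(void)) &registry_listener, data);
 * \endcode
 */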
/** Prepare a request to be sent to the compositor
*
* \param proxy The proxy object
* \param opcode Opcode of the request to be sent
* \param ... Extra arguments for the given request
*
* Translates the request given by opcode and the extra arguments into the
* wire format and write it to the connection buffer.
*
* The example below creates a proxy object with the wl_surface_interface
* using a wl_compositor factory interface and sends the
* \c compositor.create_surface request using \ref wl_proxy_marshal(). Note
* the \c id is the extra argument to the request as specified by the
* protocol.
*
* \code
* id = wl_proxy_create((struct wl_proxy *) wl_compositor,
* &wl_surface_interface);
* wl_proxy_marshal((struct wl_proxy *) wl_compositor,
* WL_COMPOSITOR_CREATE_SURFACE, id);
* \endcode
*
* \note This should not normally be used by non-generated code.
*
* \sa wl_proxy_create()
*
* \memberof wl_proxy
*/
WL_EXPORT void
wl_proxy_marshal(struct wl_proxy *proxy, uint32_t opcode, ...)
{
	struct wl_closure *closure;
	va_list ap;

	pthread_mutex_lock(&proxy->display->mutex);

	va_start(ap, opcode);
	closure = wl_closure_vmarshal(&proxy->object, opcode, ap,
				      &proxy->object.interface->methods[opcode]);
	va_end(ap);

	if (closure == NULL) {
		fprintf(stderr, "Error marshalling request\n");
		abort();
	}

	if (wl_debug)
		wl_closure_print(closure, &proxy->object, true);

	if (wl_closure_send(closure, proxy->display->connection)) {
		fprintf(stderr, "Error sending request: %m\n");
		abort();
	}

	wl_closure_destroy(closure);

	pthread_mutex_unlock(&proxy->display->mutex);
}
static void
display_handle_error(void *data,
		     struct wl_display *display, void *object,
		     uint32_t code, const char *message)
{
	struct wl_proxy *proxy = object;
	int err;

	wl_log("%s@%u: error %d: %s\n",
	       proxy->object.interface->name, proxy->object.id, code, message);

	switch (code) {
	case WL_DISPLAY_ERROR_INVALID_OBJECT:
	case WL_DISPLAY_ERROR_INVALID_METHOD:
		err = -EINVAL;
		break;
	case WL_DISPLAY_ERROR_NO_MEMORY:
		err = -ENOMEM;
		break;
	default:
		err = -EFAULT;
		break;
	}

	wl_display_fatal_error(display, err);
}
static void
display_handle_delete_id(void *data, struct wl_display *display, uint32_t id)
{
	struct wl_proxy *proxy;

	pthread_mutex_lock(&display->mutex);

	proxy = wl_map_lookup(&display->objects, id);

	if (!proxy)
		wl_log("error: received delete_id for unknown id (%u)\n", id);

	if (proxy && proxy != WL_ZOMBIE_OBJECT)
		proxy->flags |= WL_PROXY_FLAG_ID_DELETED;
	else
		wl_map_remove(&display->objects, id);

	pthread_mutex_unlock(&display->mutex);
}
static const struct wl_display_listener display_listener = {
	display_handle_error,
	display_handle_delete_id
};
static int
connect_to_socket(const char *name)
{
	struct sockaddr_un addr;
	socklen_t size;
	const char *runtime_dir;
	int name_size, fd;

	runtime_dir = getenv("XDG_RUNTIME_DIR");
	if (!runtime_dir) {
		fprintf(stderr,
			"error: XDG_RUNTIME_DIR not set in the environment.\n");

		/* to prevent programs reporting
		 * "failed to create display: Success" */
		errno = ENOENT;
		return -1;
	}

	if (name == NULL)
		name = getenv("WAYLAND_DISPLAY");
	if (name == NULL)
		name = "wayland-0";

	fd = wl_os_socket_cloexec(PF_LOCAL, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof addr);
	addr.sun_family = AF_LOCAL;
	name_size =
		snprintf(addr.sun_path, sizeof addr.sun_path,
			 "%s/%s", runtime_dir, name) + 1;

	assert(name_size > 0);
	if (name_size > (int)sizeof addr.sun_path) {
		fprintf(stderr,
			"error: socket path \"%s/%s\" plus null terminator"
			" exceeds 108 bytes\n", runtime_dir, name);
		close(fd);
		/* to prevent programs reporting
		 * "failed to add socket: Success" */
		errno = ENAMETOOLONG;
		return -1;
	};

	size = offsetof (struct sockaddr_un, sun_path) + name_size;
	if (connect(fd, (struct sockaddr *) &addr, size) < 0) {
		close(fd);
		return -1;
	}

	return fd;
}
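
/* Worked example of the path construction above: with
 * XDG_RUNTIME_DIR=/run/user/1000 (a typical value, assumed here) and the
 * default display name, the socket connected to is
 * /run/user/1000/wayland-0; WAYLAND_DISPLAY overrides the name when the
 * caller passes NULL. */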
/** Connect to Wayland display on an already open fd
*
* \param fd The fd to use for the connection
* \return A \ref wl_display object or \c NULL on failure
*
* The wl_display takes ownership of the fd and will close it when the
* display is destroyed. The fd will also be closed in case of
* failure.
*
* \memberof wl_display
*/
WL_EXPORT struct wl_display *
wl_display_connect_to_fd(int fd)
{
struct wl_display *display;
const char *debug;
debug = getenv("WAYLAND_DEBUG");
if (debug && (strstr(debug, "client") || strstr(debug, "1")))
wl_debug = 1;
display = malloc(sizeof *display);
if (display == NULL) {
close(fd);
return NULL;
}
memset(display, 0, sizeof *display);
display->fd = fd;
wl_map_init(&display->objects, WL_MAP_CLIENT_SIDE);
wl_event_queue_init(&display->queue, display);
wl_list_init(&display->event_queue_list);
pthread_mutex_init(&display->mutex, NULL);
client: Add wl_display_prepare_read() API to relax thread model assumptions The current thread model assumes that the application or toolkit will have one thread that either polls the display fd and dispatches events or just dispatches in a loop. Only this main thread will read from the fd while all other threads will block on a pthread condition and expect the main thread to deliver events to them. This turns out to be too restrictive. We can't assume that there always will be a thread like that. Qt QML threaded rendering will block the main thread on a condition that's signaled by a rendering thread after it finishes rendering. This leads to a deadlock when the rendering threads blocks in eglSwapBuffers(), and the main thread is waiting on the condition. Another problematic use case is with games that has a rendering thread for a splash screen while the main thread is busy loading game data or compiling shaders. The main thread isn't responsive and ends up blocking eglSwapBuffers() in the rendering thread. We also can't assume that there will be only one thread polling on the file descriptor. A valid use case is a thread receiving data from a custom wayland interface as well as a device fd or network socket. The thread may want to wait on either events from the wayland interface or data from the fd, in which case it needs to poll on both the wayland display fd and the device/network fd. The solution seems pretty straightforward: just let all threads read from the fd. However, the main-thread restriction was introduced to avoid a race. Simplified, main loops will do something like this: wl_display_dispatch_pending(display); /* Race here if other thread reads from fd and places events * in main eent queue. We go to sleep in poll while sitting on * events that may stall the application if not dispatched. */ poll(fds, nfds, -1); /* Race here if other thread reads and doesn't queue any * events for main queue. wl_display_dispatch() below will block * trying to read from the fd, while other fds in the mainloop * are ignored. */ wl_display_dispatch(display); The restriction that only the main thread can read from the fd avoids these races, but has the problems described above. This patch introduces new API to solve both problems. We add int wl_display_prepare_read(struct wl_display *display); and int wl_display_read_events(struct wl_display *display); wl_display_prepare_read() registers the calling thread as a potential reader of events. Once data is available on the fd, all reader threads must call wl_display_read_events(), at which point one of the threads will read from the fd and distribute the events to event queues. When that is done, all threads return from wl_display_read_events(). From the point of view of a single thread, this ensures that between calling wl_display_prepare_read() and wl_display_read_events(), no other thread will read from the fd and queue events in its event queue. This avoids the race conditions described above, and we avoid relying on any one thread to be available to read events.
2013-03-17 14:21:48 -04:00
pthread_cond_init(&display->reader_cond, NULL);
display->reader_count = 0;
wl_map_insert_new(&display->objects, 0, NULL);
display->proxy.object.interface = &wl_display_interface;
display->proxy.object.id =
wl_map_insert_new(&display->objects, 0, display);
display->proxy.display = display;
display->proxy.object.implementation = (void(**)(void)) &display_listener;
2011-02-18 15:28:54 -05:00
display->proxy.user_data = display;
client: Add wl_event_queue for multi-thread dispatching This introduces wl_event_queue, which is what will make multi-threaded wayland clients possible and useful. The driving use case is that of a GL rendering thread that renders and calls eglSwapBuffer independently of a "main thread" that owns the wl_display and handles input events and everything else. In general, the EGL and GL APIs have a threading model that requires the wayland client library to be usable from several threads. Finally, the current callback model gets into trouble even in a single threaded scenario: if we have to block in eglSwapBuffers, we may end up doing unrelated callbacks from within EGL. The wl_event_queue mechanism lets the application (or middleware such as EGL or toolkits) assign a proxy to an event queue. Only events from objects associated with the queue will be put in the queue, and conversely, events from objects associated with the queue will not be queue up anywhere else. The wl_display struct has a built-in event queue, which is considered the main and default event queue. New proxies are associated with the same queue as the object that created them (either the object that a request with a new-id argument was sent to or the object that sent an event with a new-id argument). A proxy can be moved to a different event queue by calling wl_proxy_set_queue(). A subsystem, such as EGL, will then create its own event queue and associate the objects it expects to receive events from with that queue. If EGL needs to block and wait for a certain event, it can keep dispatching event from its queue until that events comes in. This wont call out to unrelated code with an EGL lock held. Similarly, we don't risk the main thread handling an event from an EGL object and then calling into EGL from a different thread without the lock held.
2012-10-05 13:49:48 -04:00
display->proxy.queue = &display->queue;
display->proxy.flags = 0;
display->proxy.refcount = 1;
display->connection = wl_connection_create(display->fd);
if (display->connection == NULL)
goto err_connection;
return display;
err_connection:
pthread_mutex_destroy(&display->mutex);
pthread_cond_destroy(&display->reader_cond);
wl_map_release(&display->objects);
close(display->fd);
free(display);
return NULL;
}
/** Connect to a Wayland display
*
* \param name Name of the Wayland display to connect to
* \return A \ref wl_display object or \c NULL on failure
*
* Connect to the Wayland display named \c name. If \c name is \c NULL,
* its value will be replaced with the WAYLAND_DISPLAY environment
* variable if it is set, otherwise display "wayland-0" will be used.
*
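* A minimal usage sketch (illustrative only; error handling kept short):
*
* \code
* struct wl_display *display = wl_display_connect(NULL);
* if (display == NULL) {
*         fprintf(stderr, "failed to connect to a Wayland display\n");
*         return -1;
* }
* \endcode
*
* The connection is later closed with wl_display_disconnect().
*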
* \memberof wl_display
*/
WL_EXPORT struct wl_display *
wl_display_connect(const char *name)
{
char *connection, *end;
int flags, fd;
connection = getenv("WAYLAND_SOCKET");
if (connection) {
fd = strtol(connection, &end, 0);
if (*end != '\0')
return NULL;
flags = fcntl(fd, F_GETFD);
if (flags != -1)
fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
unsetenv("WAYLAND_SOCKET");
} else {
fd = connect_to_socket(name);
if (fd < 0)
return NULL;
}
return wl_display_connect_to_fd(fd);
}
/** Close a connection to a Wayland display
*
* \param display The display context object
*
* Close the connection to \c display and free all resources associated
* with it.
*
* \memberof wl_display
*/
WL_EXPORT void
wl_display_disconnect(struct wl_display *display)
{
wl_connection_destroy(display->connection);
wl_map_release(&display->objects);
wl_event_queue_release(&display->queue);
pthread_mutex_destroy(&display->mutex);
pthread_cond_destroy(&display->reader_cond);
if (display->fd > 0)
close(display->fd);
free(display);
}
/** Get a display context's file descriptor
*
* \param display The display context object
* \return Display object file descriptor
*
* Return the file descriptor associated with a display so it can be
* integrated into the client's main loop.
*
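* A sketch of how the descriptor can feed a poll() based main loop
* (illustrative; actual reading should go through wl_display_prepare_read()
* and wl_display_read_events() rather than reading the fd directly):
*
* \code
* struct pollfd fds[1];
*
* fds[0].fd = wl_display_get_fd(display);
* fds[0].events = POLLIN;
* poll(fds, 1, -1);
* \endcode
*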
* \memberof wl_display
*/
WL_EXPORT int
wl_display_get_fd(struct wl_display *display)
{
return display->fd;
}
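/* Callback and listener used by wl_display_roundtrip(): when the server
 * answers the wl_display_sync() request, the done flag is set and the
 * callback object is destroyed. */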
static void
sync_callback(void *data, struct wl_callback *callback, uint32_t serial)
{
int *done = data;
*done = 1;
wl_callback_destroy(callback);
}
static const struct wl_callback_listener sync_listener = {
sync_callback
};
/** Block until all pending requests are processed by the server
*
* \param display The display context object
* \return The number of dispatched events on success or -1 on failure
*
* Blocks until the server has processed all currently issued requests
* and has sent out pending events on all event queues.
*
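* A short sketch (illustrative; the error path simply bails out):
*
* \code
* if (wl_display_roundtrip(display) == -1) {
*         fprintf(stderr, "wl_display_roundtrip() failed\n");
*         return -1;
* }
* \endcode
*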
* \memberof wl_display
*/
WL_EXPORT int
wl_display_roundtrip(struct wl_display *display)
{
struct wl_callback *callback;
int done, ret = 0;
done = 0;
callback = wl_display_sync(display);
wl_callback_add_listener(callback, &sync_listener, &done);
while (!done && ret >= 0)
ret = wl_display_dispatch(display);
if (ret == -1 && !done)
wl_callback_destroy(callback);
return ret;
}
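/* Create a client-side proxy for every new-id ('n') argument carried by
 * an incoming event, so the new objects exist before the closure is
 * queued. Returns 0 on success, -1 if a proxy could not be created. */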
static int
create_proxies(struct wl_proxy *sender, struct wl_closure *closure)
{
struct wl_proxy *proxy;
const char *signature;
struct argument_details arg;
uint32_t id;
int i;
int count;
signature = closure->message->signature;
count = arg_count_for_signature(signature);
for (i = 0; i < count; i++) {
signature = get_next_argument(signature, &arg);
switch (arg.type) {
case 'n':
id = closure->args[i].n;
if (id == 0) {
closure->args[i].o = NULL;
break;
}
proxy = wl_proxy_create_for_id(sender, id,
closure->message->types[i]);
if (proxy == NULL)
return -1;
closure->args[i].o = (struct wl_object *)proxy;
break;
default:
break;
}
}
return 0;
}
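/* Take a reference on every object ('o') and new-id ('n') argument in the
 * closure so the proxies stay valid until the event is dispatched, even if
 * the application destroys them in the meantime. */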
static void
increase_closure_args_refcount(struct wl_closure *closure)
{
const char *signature;
struct argument_details arg;
int i, count;
struct wl_proxy *proxy;
signature = closure->message->signature;
count = arg_count_for_signature(signature);
for (i = 0; i < count; i++) {
signature = get_next_argument(signature, &arg);
switch (arg.type) {
case 'n':
case 'o':
proxy = (struct wl_proxy *) closure->args[i].o;
if (proxy)
proxy->refcount++;
break;
default:
break;
}
}
}
static int
queue_event(struct wl_display *display, int len)
{
uint32_t p[2], id;
int opcode, size;
struct wl_proxy *proxy;
struct wl_closure *closure;
const struct wl_message *message;
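/* Peek at the message header: the first word is the object id, the
 * second packs the size in the upper 16 bits and the opcode in the
 * lower 16 bits. */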
wl_connection_copy(display->connection, p, sizeof p);
id = p[0];
opcode = p[1] & 0xffff;
size = p[1] >> 16;
if (len < size)
return 0;
proxy = wl_map_lookup(&display->objects, id);
if (proxy == WL_ZOMBIE_OBJECT) {
wl_connection_consume(display->connection, size);
return size;
} else if (proxy == NULL) {
wl_connection_consume(display->connection, size);
return size;
}
message = &proxy->object.interface->events[opcode];
closure = wl_connection_demarshal(display->connection, size,
&display->objects, message);
if (!closure)
return -1;
if (create_proxies(proxy, closure) < 0) {
wl_closure_destroy(closure);
return -1;
}
if (wl_closure_lookup_objects(closure, &display->objects) != 0) {
wl_closure_destroy(closure);
return -1;
}
increase_closure_args_refcount(closure);
proxy->refcount++;
closure->proxy = proxy;
if (wl_list_empty(&proxy->queue->event_list))
pthread_cond_signal(&proxy->queue->cond);
wl_list_insert(proxy->queue->event_list.prev, &closure->link);
return size;
}
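/* Drop the references taken in increase_closure_args_refcount(). Arguments
 * whose proxy was destroyed while the event was queued are replaced with
 * NULL, and proxies whose refcount drops to zero are freed. */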
static void
decrease_closure_args_refcount(struct wl_closure *closure)
{
const char *signature;
struct argument_details arg;
int i, count;
struct wl_proxy *proxy;
signature = closure->message->signature;
count = arg_count_for_signature(signature);
for (i = 0; i < count; i++) {
signature = get_next_argument(signature, &arg);
switch (arg.type) {
case 'n':
case 'o':
proxy = (struct wl_proxy *) closure->args[i].o;
if (proxy) {
if (proxy->flags & WL_PROXY_FLAG_DESTROYED)
closure->args[i].o = NULL;
proxy->refcount--;
if (!proxy->refcount)
free(proxy);
}
break;
default:
break;
}
}
}
static void
dispatch_event(struct wl_display *display, struct wl_event_queue *queue)
{
struct wl_closure *closure;
struct wl_proxy *proxy;
int opcode;
bool proxy_destroyed;
closure = container_of(queue->event_list.next,
struct wl_closure, link);
wl_list_remove(&closure->link);
opcode = closure->opcode;
/* Verify that the receiving object is still valid by checking if it
* has been destroyed by the application. */
decrease_closure_args_refcount(closure);
proxy = closure->proxy;
proxy_destroyed = !!(proxy->flags & WL_PROXY_FLAG_DESTROYED);
proxy->refcount--;
if (proxy_destroyed) {
if (!proxy->refcount)
free(proxy);
wl_closure_destroy(closure);
return;
}
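/* Drop the display lock while invoking the handler so the application
 * callback can issue requests without deadlocking. */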
pthread_mutex_unlock(&display->mutex);
if (proxy->object.implementation) {
if (wl_debug)
wl_closure_print(closure, &proxy->object, false);
wl_closure_invoke(closure, WL_CLOSURE_INVOKE_CLIENT,
&proxy->object, opcode,
proxy->user_data);
}
wl_closure_destroy(closure);
pthread_mutex_lock(&display->mutex);
}
static int
read_events(struct wl_display *display)
{
int total, rem, size;
uint32_t serial;
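/* The last prepared reader to arrive does the actual read and
 * distributes the events to the queues; all other readers wait below
 * for read_serial to advance. */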
display->reader_count--;
if (display->reader_count == 0) {
total = wl_connection_read(display->connection);
if (total == -1) {
if (errno != EAGAIN)
display_fatal_error(display, errno);
return -1;
} else if (total == 0) {
/* The compositor has closed the socket. This
* should be considered an error so we'll fake
* an errno */
errno = EPIPE;
display_fatal_error(display, errno);
return -1;
}
for (rem = total; rem >= 8; rem -= size) {
size = queue_event(display, rem);
if (size == -1) {
display_fatal_error(display, errno);
return -1;
} else if (size == 0) {
break;
}
}
display->read_serial++;
pthread_cond_broadcast(&display->reader_cond);
} else {
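/* Another thread is (or will be) reading; sleep until it bumps
 * read_serial to signal that the events have been queued. */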
serial = display->read_serial;
while (display->read_serial == serial)
pthread_cond_wait(&display->reader_cond,
&display->mutex);
}
return 0;
}
/** Read events from display file descriptor
*
* \param display The display context object
* \return 0 on success or -1 on error. In case of error errno will
* be set accordingly
*
* This will read events from the file descriptor for the display.
* This function does not dispatch events, it only reads and queues
* events into their corresponding event queues. If no data is
 * available on the file descriptor, wl_display_read_events() returns
* immediately. To dispatch events that may have been queued, call
* wl_display_dispatch_pending() or
* wl_display_dispatch_queue_pending().
*
* Before calling this function, wl_display_prepare_read() must be
* called first.
*
* \memberof wl_display
*/
WL_EXPORT int
wl_display_read_events(struct wl_display *display)
{
int ret;
pthread_mutex_lock(&display->mutex);
ret = read_events(display);
pthread_mutex_unlock(&display->mutex);
return ret;
}
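/* A minimal usage sketch (assuming a valid display, a struct pollfd pfd,
 * and a private queue created with wl_display_create_queue() with the
 * relevant proxies assigned to it): a thread reads and dispatches events
 * for its own queue using the prepare/read sequence.
 *
 * while (wl_display_prepare_read_queue(display, queue) != 0)
 *         wl_display_dispatch_queue_pending(display, queue);
 * wl_display_flush(display);
 *
 * pfd.fd = wl_display_get_fd(display);
 * pfd.events = POLLIN;
 * if (poll(&pfd, 1, -1) > 0)
 *         wl_display_read_events(display);
 * else
 *         wl_display_cancel_read(display);
 *
 * wl_display_dispatch_queue_pending(display, queue);
 */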
static int
dispatch_queue(struct wl_display *display, struct wl_event_queue *queue)
{
int count;
if (display->last_error)
goto err;
for (count = 0; !wl_list_empty(&queue->event_list); count++) {
dispatch_event(display, queue);
if (display->last_error)
goto err;
}
return count;
err:
errno = display->last_error;
return -1;
}
WL_EXPORT int
wl_display_prepare_read_queue(struct wl_display *display,
struct wl_event_queue *queue)
{
int ret;
pthread_mutex_lock(&display->mutex);
if (!wl_list_empty(&queue->event_list)) {
errno = EAGAIN;
ret = -1;
} else {
display->reader_count++;
ret = 0;
}
pthread_mutex_unlock(&display->mutex);
return ret;
}
/** Prepare to read events after polling file descriptor
*
* \param display The display context object
 * \return 0 on success or -1 if the event queue was not empty
*
* This function must be called before reading from the file
* descriptor using wl_display_read_events(). Calling
 * wl_display_prepare_read() announces the calling thread's intention
* to read and ensures that until the thread is ready to read and
* calls wl_display_read_events(), no other thread will read from the
 * file descriptor. This only succeeds if the event queue is empty; if
 * there are undispatched events in the queue, -1 is returned and errno
 * is set to EAGAIN.
*
* If a thread successfully calls wl_display_prepare_read(), it must
* either call wl_display_read_events() when it's ready or cancel the
* read intention by calling wl_display_cancel_read().
*
* Use this function before polling on the display fd or to integrate
* the fd into a toolkit event loop in a race-free way. Typically, a
* toolkit will call wl_display_dispatch_pending() before sleeping, to
* make sure it doesn't block with unhandled events. Upon waking up,
* it will assume the file descriptor is readable and read events from
* the fd by calling wl_display_dispatch(). Simplified, we have:
*
* wl_display_dispatch_pending(display);
* wl_display_flush(display);
* poll(fds, nfds, -1);
* wl_display_dispatch(display);
*
* There are two races here: first, before blocking in poll(), the fd
* could become readable and another thread reads the events. Some of
* these events may be for the main queue and the other thread will
* queue them there and then the main thread will go to sleep in
* poll(). This will stall the application, which could be waiting
 * for an event to kick off the next animation frame, for example.
*
* The other race is immediately after poll(), where another thread
* could preempt and read events before the main thread calls
* wl_display_dispatch(). This call now blocks and starves the other
* fds in the event loop.
*
* A correct sequence would be:
*
* while (wl_display_prepare_read(display) != 0)
* wl_display_dispatch_pending(display);
* wl_display_flush(display);
* poll(fds, nfds, -1);
* wl_display_read_events(display);
* wl_display_dispatch_pending(display);
*
* Here we call wl_display_prepare_read(), which ensures that between
* returning from that call and eventually calling
* wl_display_read_events(), no other thread will read from the fd and
* queue events in our queue. If the call to
* wl_display_prepare_read() fails, we dispatch the pending events and
* try again until we're successful.
*
* \memberof wl_display
*/
WL_EXPORT int
wl_display_prepare_read(struct wl_display *display)
{
return wl_display_prepare_read_queue(display, &display->queue);
}
/** Release exclusive access to display file descriptor
*
* \param display The display context object
*
 * This releases the exclusive access acquired by wl_display_prepare_read().
 * It is useful for canceling the read intention when, for example, a
 * timed-out poll() reports that the fd is not readable and the thread is
 * not going to read from the fd anytime soon.
*
* \memberof wl_display
*/
WL_EXPORT void
wl_display_cancel_read(struct wl_display *display)
{
pthread_mutex_lock(&display->mutex);
display->reader_count--;
if (display->reader_count == 0) {
display->read_serial++;
pthread_cond_broadcast(&display->reader_cond);
}
pthread_mutex_unlock(&display->mutex);
}
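/* A minimal sketch of the cancel path (assuming the calling thread has
 * already returned successfully from wl_display_prepare_read() and pfd
 * refers to the display fd): poll with a timeout and give up the read
 * intention if the fd did not become readable.
 *
 * if (poll(&pfd, 1, 100) > 0 && (pfd.revents & POLLIN))
 *         wl_display_read_events(display);
 * else
 *         wl_display_cancel_read(display);
 */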
/** Dispatch events in an event queue
*
* \param display The display context object
* \param queue The event queue to dispatch
* \return The number of dispatched events on success or -1 on failure
*
* Dispatch all incoming events for objects assigned to the given
* event queue. On failure -1 is returned and errno set appropriately.
*
 * This function blocks if there are no events to dispatch. When called from
* the main thread, it will block reading data from the display fd. For other
* threads this will block until the main thread queues events on the queue
* passed as argument.
*
* \memberof wl_display
*/
WL_EXPORT int
wl_display_dispatch_queue(struct wl_display *display,
struct wl_event_queue *queue)
{
struct pollfd pfd[2];
int ret;
pthread_mutex_lock(&display->mutex);
ret = dispatch_queue(display, queue);
if (ret == -1)
goto err_unlock;
if (ret > 0) {
pthread_mutex_unlock(&display->mutex);
return ret;
}
ret = wl_connection_flush(display->connection);
if (ret < 0 && errno != EAGAIN) {
display_fatal_error(display, errno);
goto err_unlock;
}
display->reader_count++;
pthread_mutex_unlock(&display->mutex);
pfd[0].fd = display->fd;
pfd[0].events = POLLIN;
	if (poll(pfd, 1, -1) == -1) {
		wl_display_cancel_read(display);
		return -1;
	}
pthread_mutex_lock(&display->mutex);
if (read_events(display) == -1)
goto err_unlock;
ret = dispatch_queue(display, queue);
if (ret == -1)
goto err_unlock;
pthread_mutex_unlock(&display->mutex);
return ret;
err_unlock:
pthread_mutex_unlock(&display->mutex);
return -1;
}
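/* A minimal sketch of blocking on a private queue until a specific event
 * arrives (assuming "done" is set by a listener on a proxy assigned to
 * the queue, for example a wl_callback returned by wl_display_sync()):
 *
 * while (!done)
 *         if (wl_display_dispatch_queue(display, queue) == -1)
 *                 break;
 */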
/** Dispatch pending events in an event queue
*
* \param display The display context object
* \param queue The event queue to dispatch
* \return The number of dispatched events on success or -1 on failure
*
* Dispatch all incoming events for objects assigned to the given
* event queue. On failure -1 is returned and errno set appropriately.
 * If there are no events queued, this function returns immediately.
*
* \memberof wl_display
* \since 1.0.2
*/
WL_EXPORT int
wl_display_dispatch_queue_pending(struct wl_display *display,
struct wl_event_queue *queue)
{
	int ret;
	pthread_mutex_lock(&display->mutex);
	ret = dispatch_queue(display, queue);
	if (ret == -1)
		goto err_unlock;
	pthread_mutex_unlock(&display->mutex);
	return ret;
err_unlock:
pthread_mutex_unlock(&display->mutex);
return -1;
}
/** Process incoming events
*
* \param display The display context object
* \return The number of dispatched events on success or -1 on failure
*
* Dispatch the display's main event queue.
*
* If the main event queue is empty, this function blocks until there are
* events to be read from the display fd. Events are read and queued on
* the appropriate event queues. Finally, events on the main event queue
* are dispatched.
*
* \note It is not possible to check if there are events on the main queue
* or not. For dispatching main queue events without blocking, see \ref
* wl_display_dispatch_pending().
*
* \note Calling this will release the display file descriptor if this
* thread acquired it using wl_display_acquire_fd().
*
* \sa wl_display_dispatch_pending(), wl_display_dispatch_queue()
*
* \memberof wl_display
*/
WL_EXPORT int
wl_display_dispatch(struct wl_display *display)
{
return wl_display_dispatch_queue(display, &display->queue);
}
/** Dispatch main queue events without reading from the display fd
*
* \param display The display context object
* \return The number of dispatched events or -1 on failure
*
* This function dispatches events on the main event queue. It does not
* attempt to read the display fd and simply returns zero if the main
* queue is empty, i.e., it doesn't block.
*
* This is necessary when a client's main loop wakes up on some fd other
* than the display fd (network socket, timer fd, etc) and calls \ref
* wl_display_dispatch_queue() from that callback. This may queue up
* events in the main queue while reading all data from the display fd.
* When the main thread returns to the main loop to block, the display fd
* no longer has data, causing a call to \em poll(2) (or similar
* functions) to block indefinitely, even though there are events ready
* to dispatch.
*
 * To properly integrate the wayland display fd into a main loop, the
* client should always call \ref wl_display_dispatch_pending() and then
* \ref wl_display_flush() prior to going back to sleep. At that point,
* the fd typically doesn't have data so attempting I/O could block, but
* events queued up on the main queue should be dispatched.
*
* A real-world example is a main loop that wakes up on a timerfd (or a
* sound card fd becoming writable, for example in a video player), which
* then triggers GL rendering and eventually eglSwapBuffers().
* eglSwapBuffers() may call wl_display_dispatch_queue() if it didn't
* receive the frame event for the previous frame, and as such queue
* events in the main queue.
*
* \note Calling this makes the current thread the main one.
*
* \sa wl_display_dispatch(), wl_display_dispatch_queue(),
* wl_display_flush()
*
* \memberof wl_display
*/
WL_EXPORT int
wl_display_dispatch_pending(struct wl_display *display)
{
return wl_display_dispatch_queue_pending(display, &display->queue);
}
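/* A minimal sketch of a main loop that waits on the display fd together
 * with another fd (assuming fds[0] is the display fd and fds[1] is some
 * other event source such as a timerfd):
 *
 * while (wl_display_prepare_read(display) != 0)
 *         wl_display_dispatch_pending(display);
 * wl_display_flush(display);
 *
 * poll(fds, 2, -1);
 *
 * if (fds[0].revents & POLLIN)
 *         wl_display_read_events(display);
 * else
 *         wl_display_cancel_read(display);
 * wl_display_dispatch_pending(display);
 */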
/** Retrieve the last error occurred on a display
*
* \param display The display context object
 * \return The last error that occurred on \c display, or 0 if no error has occurred
*
 * Return the last error that occurred on the display. This may be an error sent
* by the server or caused by the local client.
*
* \note Errors are \b fatal. If this function returns non-zero the display
* can no longer be used.
*
* \memberof wl_display
*/
WL_EXPORT int
wl_display_get_error(struct wl_display *display)
{
int ret;
pthread_mutex_lock(&display->mutex);
ret = display->last_error;
pthread_mutex_unlock(&display->mutex);
return ret;
}
/** Send all buffered requests on the display to the server
*
* \param display The display context object
 * \return The number of bytes sent on success or -1 on failure
*
* Send all buffered data on the client side to the server. Clients
* should call this function before blocking. On success, the number
* of bytes sent to the server is returned. On failure, this
* function returns -1 and errno is set appropriately.
*
* wl_display_flush() never blocks. It will write as much data as
* possible, but if all data could not be written, errno will be set
* to EAGAIN and -1 returned. In that case, use poll on the display
* file descriptor to wait for it to become writable again.
*
* \memberof wl_display
*/
WL_EXPORT int
wl_display_flush(struct wl_display *display)
{
int ret;
pthread_mutex_lock(&display->mutex);
if (display->last_error) {
errno = display->last_error;
ret = -1;
} else {
ret = wl_connection_flush(display->connection);
if (ret < 0 && errno != EAGAIN)
display_fatal_error(display, errno);
}
pthread_mutex_unlock(&display->mutex);
return ret;
}
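/* A minimal sketch of flushing with EAGAIN handling (assuming pfd refers
 * to the display fd): keep polling for writability until all buffered
 * requests have been written or a fatal error occurs.
 *
 * while (wl_display_flush(display) == -1 && errno == EAGAIN) {
 *         pfd.events = POLLOUT;
 *         poll(&pfd, 1, -1);
 * }
 */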
/** Set the user data associated with a proxy
*
* \param proxy The proxy object
* \param user_data The data to be associated with proxy
*
* Set the user data associated with \c proxy. When events for this
* proxy are received, \c user_data will be supplied to its listener.
*
* \memberof wl_proxy
*/
WL_EXPORT void
wl_proxy_set_user_data(struct wl_proxy *proxy, void *user_data)
{
proxy->user_data = user_data;
}
/** Get the user data associated with a proxy
*
* \param proxy The proxy object
* \return The user data associated with proxy
*
* \memberof wl_proxy
*/
WL_EXPORT void *
wl_proxy_get_user_data(struct wl_proxy *proxy)
{
return proxy->user_data;
}
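/* A minimal sketch of attaching application state to a proxy (assuming a
 * hypothetical struct window tied to a wl_surface proxy):
 *
 * wl_proxy_set_user_data((struct wl_proxy *) surface, window);
 * ...
 * window = wl_proxy_get_user_data((struct wl_proxy *) surface);
 */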
/** Get the id of a proxy object
*
* \param proxy The proxy object
 * \return The id of the object associated with the proxy
*
* \memberof wl_proxy
*/
WL_EXPORT uint32_t
wl_proxy_get_id(struct wl_proxy *proxy)
{
return proxy->object.id;
}
/** Get the interface name (class) of a proxy object
*
* \param proxy The proxy object
* \return The interface name of the object associated with the proxy
*
* \memberof wl_proxy
*/
WL_EXPORT const char *
wl_proxy_get_class(struct wl_proxy *proxy)
{
return proxy->object.interface->name;
}
/** Assign a proxy to an event queue
*
* \param proxy The proxy object
* \param queue The event queue that will handle this proxy
*
* Assign proxy to event queue. Events coming from \c proxy will be
* queued in \c queue instead of the display's main queue.
*
* \sa wl_display_dispatch_queue()
*
* \memberof wl_proxy
*/
WL_EXPORT void
wl_proxy_set_queue(struct wl_proxy *proxy, struct wl_event_queue *queue)
{
proxy->queue = queue;
}
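/* A minimal sketch of moving a newly created proxy to a private queue,
 * as middleware such as EGL typically does for the objects it dispatches
 * itself (queue is assumed to come from wl_display_create_queue()):
 *
 * callback = wl_display_sync(display);
 * wl_proxy_set_queue((struct wl_proxy *) callback, queue);
 */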
WL_EXPORT void
wl_log_set_handler_client(wl_log_func_t handler)
{
wl_log_handler = handler;
}