Now that we have the necessary infrastructure to memexport and
memimport a memfd memblock, extend that support higher up in the
chain with pstreams.
A PA endpoint can now _transparently_ send a memfd memblock to the
other end by simply calling pa_pstream_send_memblock() – provided
the block's memfd pool was earlier registered with the pstream.
If the pipe does not support memfd transfers, we fall back to
sending the block's full data instead of just its reference.
** Further details:
A single pstream connection usually transfers blocks from multiple
pools including the server's srbchannel mempool, the client's
audio data mempool, and the server's global core mempool.
If these mempools are memfd-backed, we now require registering
them with the pstream before sending any blocks they cover. This
is done to minimize fd passing overhead and avoid fd leaks.
Moreover, to support all these pools without hard-coding their
number or nature in the Pulse communication protocol itself, a new
REGISTER_MEMFD_SHMID command is introduced. That command can be
sent _anytime_ during the pstream's lifetime and is used for
creating SHM ID to memfd mappings on demand.
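Roughly, the receiving end handles the command by attaching the memfd
region, whose fd arrives as SCM_RIGHTS ancillary data, under the
announced SHM ID. A sketch, with names inferred from this description:

    #include <pulsecore/memblock.h>
    #include <pulsecore/log.h>

    /* Sketch of receive-side handling of REGISTER_MEMFD_SHMID.
     * The SHM ID travels in-band; the fd travels out-of-band. */
    static int handle_register_memfd_shmid(pa_memimport *import,
                                           uint32_t shm_id, int memfd_fd) {
        /* Create the ID<->memfd mapping on demand; later blocks
         * reference the region purely by SHM ID, so this fd can be
         * closed once the region is attached. */
        if (pa_memimport_attach_memfd(import, shm_id, memfd_fd) < 0) {
            pa_log("failed to attach memfd region for SHM ID %u", shm_id);
            return -1;
        }
        return 0;
    }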
Suggested-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
Color global mempools with a special mark. This special marking
is needed for handling memfd-backed pools.
To avoid fd leaks, memfd pools are registered with the connection
pstream to create an ID<->memfd mapping on both PA endpoints.
Such memory regions are then always referenced by their IDs and
never by their fds, and so their fds can be safely closed later.
Unfortunately this scheme cannot work with global pools: the
ID<->memfd registration would have to be repeated for each newly
connected client, so such pools need special handling. That is,
the pool's fd must be kept open at all times :-(
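For illustration, a sketch of the resulting rule; pa_mempool_is_global()
and pa_mempool_take_memfd_fd() are accessor names assumed from this
description:

    #include <pulsecore/memblock.h>
    #include <pulsecore/core-util.h>  /* pa_close() */

    /* Sketch: after registering a memfd pool with a pstream, decide
     * whether its fd can now be closed. */
    static void after_registration(pa_mempool *pool) {
        if (pa_mempool_is_global(pool))
            return;  /* must stay open: future clients will register it too */

        /* Per-client pool: the ID<->memfd mapping now exists on both
         * endpoints, so the fd is no longer needed. */
        pa_close(pa_mempool_take_memfd_fd(pool));
    }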
Almost all mempools are now created on a per-client basis. The
only exception is the pa_core's mempool, which is still shared
between all clients of the system.
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
Soon we're going to have three types of memory pools: POSIX shm_open()
pools, memfd memfd_create() ones, and privately malloc()-ed pools.
Thus introduce annotations for the supported memory types and change
pa_mempool_new() into a factory method based on the required memory
type.
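A sketch of the resulting interface; the enumerator spellings and the
factory signature are assumptions based on the three pool types listed
above:

    #include <stddef.h>

    typedef struct pa_mempool pa_mempool;  /* opaque, as in pulsecore */

    /* Sketch of the memory-type annotation. */
    typedef enum pa_mem_type {
        PA_MEM_TYPE_SHARED_POSIX,   /* POSIX shm_open() pool  */
        PA_MEM_TYPE_SHARED_MEMFD,   /* memfd_create() pool    */
        PA_MEM_TYPE_PRIVATE         /* privately malloc()-ed  */
    } pa_mem_type_t;

    /* The factory picks the backing store from the type. */
    pa_mempool *pa_mempool_new(pa_mem_type_t type, size_t block_size);

    /* e.g. a memfd-backed pool; 0 keeps the default slot size: */
    static pa_mempool *make_memfd_pool(void) {
        return pa_mempool_new(PA_MEM_TYPE_SHARED_MEMFD, 0);
    }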
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
In future commits, server-wide SHMs will be replaced with per-client
ones, dynamically created and freed as client connections open and
close.
Meanwhile, the current PA design does not guarantee that per-client
mempool blocks are referenced only by client-specific objects.
Thus reference-count the pools, and let each memblock allocated from
the pool, or merely attached to it, increment the pool's refcount
upon allocation. This way, a per-client mempool is freed only when no
component in the system holds any reference to its blocks.
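A sketch of the intended lifetime rule; pa_mempool_ref() and
pa_mempool_unref() are names inferred from this description, and in
the real code the calls would live inside the memblock alloc/free
paths rather than in wrappers like these:

    #include <pulsecore/memblock.h>

    /* Sketch: every memblock pins its owning pool for its lifetime. */
    static pa_memblock *block_new(pa_mempool *pool, size_t length) {
        pa_memblock *b = pa_memblock_new(pool, length);
        pa_mempool_ref(pool);    /* block allocated from (or attached to) pool */
        return b;
    }

    static void block_free(pa_mempool *pool, pa_memblock *b) {
        pa_memblock_unref(b);
        pa_mempool_unref(pool);  /* pool is freed once the last pin drops */
    }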
DiscussionLink: https://goo.gl/qesVMV
Suggested-by: Tanu Kaskinen <tanuk@iki.fi>
Suggested-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Ahmed S. Darwish <darwish.07@gmail.com>
FSF addresses used in PA sources are no longer valid, and rpmlint
generates numerous warnings during packaging because of this.
This patch changes all FSF addresses to the FSF web page, per the
GPL how-to: https://www.gnu.org/licenses/gpl-howto.en.html
Done automatically by sed-ing through sources.
Runs four tests (sketched below):
1) Small packets, iochannel
2) Big packets, iochannel
3) Small packets, srbchannel
4) Big packets, srbchannel
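A sketch of driving that 2x2 matrix; pstream_test_run() and its
parameters are hypothetical stand-ins for the real test cases:

    #include <stdbool.h>

    /* Hypothetical runner: 'big_packets' selects the packet size,
     * 'use_srbchannel' selects srbchannel vs. iochannel transport. */
    extern int pstream_test_run(bool big_packets, bool use_srbchannel);

    int main(void) {
        /* Iterate the four combinations listed above. */
        for (int srb = 0; srb <= 1; srb++)
            for (int big = 0; big <= 1; big++)
                if (pstream_test_run(big, srb) < 0)
                    return 1;
        return 0;
    }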
Signed-off-by: David Henningsson <david.henningsson@canonical.com>