Firstly, separate the message dropping logic into
its own `drop_from_out_queue()` function.
Secondly, do not check earlier messages if the NEW
event for a particular object has been reached while
processing a REMOVE event for that object.
Thirdly, if, while processing a REMOVE event,
the corresponding NEW event is found and dropped,
drop the REMOVE event as well.
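A rough sketch of the resulting helper (the `message` fields `id`
and `event` and the `message_free()` helper are hypothetical; the
real queue walk differs):

    static bool drop_from_out_queue(struct client *client, uint32_t id)
    {
            struct message *m, *tmp;
            bool found_new = false;

            spa_list_for_each_safe(m, tmp, &client->out_messages, link) {
                    if (m->id != id)
                            continue;
                    if (m->event == EVENT_NEW)
                            found_new = true;
                    spa_list_remove(&m->link);
                    message_free(m);
            }
            /* if the NEW event was still queued, the client never saw
             * the object, so the caller drops the REMOVE event too */
            return found_new;
    }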
See #1840
Parse the quantum_limit parameter and use it to scale the buffers so
that they can hold the maximum allowed number of samples instead of
the hardcoded 8192 value.
See #1931
Also scale the max_quantum with the selected rate. Add a new
quantum_limit property that is the upper limit of the quantum
regardless of the sample rate; this is usually the allocated buffer
size.
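Illustrative only (the exact formula may differ; `def_rate` and
`def_max_quantum` stand in for the configured defaults):

    uint32_t max_quantum = (def_max_quantum * rate) / def_rate;
    /* quantum_limit caps the quantum regardless of the sample rate */
    max_quantum = SPA_MIN(max_quantum, quantum_limit);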
See #1931
Based on patch from Barnabás Pőcze <pobrn@protonmail.com>
Instead of trying to keep track of the missing bytes ourselves, use
the simple `tlength - avail - requested` formula to request more
bytes from the client.
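A minimal sketch of the idea, assuming `tlength` and `requested` are
tracked on the stream and a hypothetical `send_request()` helper:

    int32_t missing = stream->tlength - avail - stream->requested;
    if (missing > 0) {
            send_request(stream, missing);   /* hypothetical helper */
            stream->requested += missing;
    }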
Fixes #1981
Extend the server.address property so that you can also specify
an object per server. Add support for configuring some aspects of the
server such as max-clients and backlog.
Most importantly, the pipewire client.access can be configured per
server.
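For example, a sketch of the new object form (key names follow the
commit summary and may differ in the final configuration):

    pulse.properties = {
        server.address = [
            "unix:native"
            { address = "tcp:4713"
              max-clients = 64
              backlog = 32
              client.access = "restricted"
            }
        ]
    }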
See #1960
Make the alignment parameter optional when negotiating buffers.
Default to a 16-byte alignment and adjust for the maximum CPU
alignment.
Remove the useless align buffer parameter in plugins; we always
set it to 16 anyway.
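A minimal sketch of a plugin emitting buffer params without the
align field (illustrative values):

    struct spa_pod_builder b;
    uint8_t buffer[1024];
    struct spa_pod *param;

    spa_pod_builder_init(&b, buffer, sizeof(buffer));
    param = spa_pod_builder_add_object(&b,
            SPA_TYPE_OBJECT_ParamBuffers, SPA_PARAM_Buffers,
            SPA_PARAM_BUFFERS_buffers, SPA_POD_CHOICE_RANGE_Int(4, 2, 16),
            SPA_PARAM_BUFFERS_blocks,  SPA_POD_Int(1),
            SPA_PARAM_BUFFERS_size,    SPA_POD_Int(4096),
            SPA_PARAM_BUFFERS_stride,  SPA_POD_Int(4));
    /* SPA_PARAM_BUFFERS_align is omitted: negotiation defaults to a
     * 16-byte alignment, adjusted for the maximum CPU alignment */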
For example, pulseaudio.js[1] immediately sends a
GET_SERVER_INFO request after AUTH, and only later
issues a SET_CLIENT_NAME.
See #1966.
[1]: https://github.com/janakj/pulseaudio.js
By default, require that a client is authenticated and has a
manager before it is allowed to run a command.
Specifically:
* AUTH requires nothing
* SET_CLIENT_NAME and STAT only require authentication
Two `pw_properties` objects are not freed in the error path.
Resolves Coverity issues: 1468665, 1468666, 1468667, 1468668.
Furthermore, the module argument string is also not freed.
Keep track of the created services in two lists: published and
pending. Move services between the lists as the avahi client's
state changes: keep services in the pending list until the avahi
daemon appears on D-Bus, move them back to the pending list if the
connection is lost, and re-publish them after reconnection.
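A sketch of the state handling (the data struct and the
`publish_pending()`/`unpublish_all()` helpers are hypothetical):

    static void client_callback(AvahiClient *c, AvahiClientState state,
                                void *userdata)
    {
            struct publish_data *d = userdata;

            switch (state) {
            case AVAHI_CLIENT_S_RUNNING:
                    publish_pending(d);   /* pending -> published */
                    break;
            case AVAHI_CLIENT_FAILURE:
            case AVAHI_CLIENT_CONNECTING:
                    unpublish_all(d);     /* published -> pending */
                    break;
            default:
                    break;
            }
    }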
When a module's `load()` fails, its `unload()` will unconditionally
be called. Freeing resources in `load()` without marking them freed
(e.g. setting the pointers to NULL) will result in a double-free
when the module's `unload()` method is called.
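A minimal sketch of the pattern (the module data and the
`setup_streams()` helper are hypothetical):

    static int module_example_load(struct module *module)
    {
            struct module_example_data *d = module->user_data;

            if (setup_streams(d) < 0) {
                    pw_properties_free(d->props);
                    d->props = NULL;   /* mark freed for unload() */
                    return -EINVAL;
            }
            return 0;
    }

    static void module_example_unload(struct module *module)
    {
            struct module_example_data *d = module->user_data;

            if (d->props != NULL)
                    pw_properties_free(d->props);
    }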
Whether the object is a sink or source is already queried at the
beginning of the function, and is kept in local variables.
Use those instead of calling `pw_manager_object_is_{sink,source}()` again.
Do not use the client's connection to create the adapter object;
instead, create a new connection. This avoids the need to set
object.linger=true and guarantees that when the pulse server goes
down, the null sink is cleaned up.
While it is not a problem, since `module_free()` calls
`pw_work_queue_cancel()`, it is completely unnecessary to schedule
the unloading more than once.
Introduce a new flag on the module which stores whether or
not an unloading has been scheduled.
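A minimal sketch of the guard (the `impl->work_queue` and
`do_unload()` names are assumed context):

    static void module_schedule_unload(struct module *module)
    {
            if (module->unloading)
                    return;
            module->unloading = true;
            pw_work_queue_add(module->impl->work_queue, module, 0,
                              do_unload, NULL);
    }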
Since `module_list` is a fixed-size array, `SPA_FOR_EACH_ELEMENT()`
can be used, so use that. This way there is no need for explicit
indexing or a sentinel at the end.
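For example (the `module_info` fields are hypothetical, the macro
comes from spa/utils/defs.h):

    static const struct module_info *find_module_info(const char *name)
    {
            const struct module_info *info;

            SPA_FOR_EACH_ELEMENT(module_list, info)
                    if (strcmp(info->name, name) == 0)
                            return info;
            return NULL;
    }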
The module-roc-{sink,source} modules simply load the corresponding
native pipewire modules; they have no dependency on ROC. So always
compile them. This way these modules are compile tested, and if the
corresponding pipewire modules are added to the system later, they
will work with no changes to the protocol-pulse module.
Previously, when a sample was "committed" from an upload stream,
its reference count was set to 1. This is problematic: if the
sample is committed a second time while there are streams playing
it, the reference count goes out of sync.
The problem can be easily triggered, especially with longer samples:
    pactl upload-sample a-long-sample.ogg
    pactl play-sample a-long-sample
    pactl play-sample a-long-sample
    pactl upload-sample a-long-sample.ogg # while playing
When the first stream finishes playing, it will free the sample,
which can cause problems e.g. for the second stream playing the
sample at that very moment.
Fix that by decoupling the buffer from the sample and making the
buffer reference counted, and remove the reference counting from
the samples as it is no longer needed.
Furthermore, previously, the reference counts were ignored when
the removal of a sample was requested. That is fixed as well.
This issue can also be triggered easily:
    pactl upload-sample a-long-sample.ogg
    pactl play-sample a-long-sample
    pactl remove-sample a-long-sample # while playing
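A minimal sketch of the reference-counted buffer (hypothetical
names; each playing stream takes a reference, and committing the
sample again just installs a fresh buffer):

    #include <stdint.h>
    #include <stdlib.h>

    struct sample_buf {
            int ref;
            uint32_t length;
            uint8_t data[];
    };

    static struct sample_buf *sample_buf_ref(struct sample_buf *buf)
    {
            buf->ref++;
            return buf;
    }

    static void sample_buf_unref(struct sample_buf *buf)
    {
            if (--buf->ref == 0)
                    free(buf);
    }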
Fixes #1953
Create a new event for modules ('destroy') which is emitted from
`module_free()`. It is used by the module loading logic to handle
the case when a module is destroyed without having properly loaded
first.
Store the modules whose load has been initiated by a particular
client in the `pending_modules` list of the client. When the
client disconnects, "detach" the client from the pending module
objects. This way the reference count need not be increased
for asynchronous module loads.
Furthermore, if the module can load synchronously, do not create
the pending module object at all.
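A sketch of the detach step on client disconnect (the
`pending_module` fields are hypothetical):

    static void client_detach_pending(struct client *client)
    {
            struct pending_module *pm;

            spa_list_consume(pm, &client->pending_modules, link) {
                    spa_list_remove(&pm->link);
                    pm->client = NULL;   /* load continues, result is dropped */
            }
    }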
Only call `spa_list_remove()` in `stream_free()` if the
stream is pending. `spa_list_remove()` does not reinitialize
the list node, therefore calling `spa_list_remove()` again
after the stream has been removed from the pending list
will corrupt the pending list of the client.
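A minimal sketch of the guard (the `pending` flag is hypothetical):

    static void stream_free(struct stream *stream)
    {
            /* spa_list_remove() does not reinitialize the node, so
             * only remove the stream while it is still pending */
            if (stream->pending) {
                    spa_list_remove(&stream->link);
                    stream->pending = false;
            }
            free(stream);
    }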