The graph will not switch quantums when there is an active node
with the node.lock-quantum property set to true.
This can be used to stop certain JACK clients from crashing when the
quantum changes.
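For example, a client can set the property when creating its node (a
minimal sketch using the pw_properties API; the property can equally
come from a config file or a session manager rule):

    /* Sketch: keep the graph quantum fixed while this node is active. */
    struct pw_properties *props = pw_properties_new(
            "node.lock-quantum", "true",
            NULL);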
Hard rate changes (now the default) make the graph suspend so that the
driver reopens the device with the new sample rate if possible. This
causes a glitch.
Soft rate changes work as before and just change the clock rate of
the position area, which makes everything resample to the new
rate without a glitch.
Sample rate changes can currently still only be triggered by
configuring the settings metadata clock.force-rate.
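For example, with the pw-metadata tool (setting the value back to 0
restores the default behaviour):

    pw-metadata -n settings 0 clock.force-rate 48000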
Add a new PW_KEY_NODE_WANT_DRIVER key to assign the node to a running
driver (does not work yet).
Use a new variable to hold the ALWAYS_PROCESS setting and also
update want_driver.
This makes things a bit more future-proof.
This is obsolete since the move to doxygen groups; having \memberof blocked
the documentation in the source file from being merged with the header file.
Add a structure that holds the defaults as they are loaded
from the config file or initialized with default values.
Copy this structure to a settings version that is used at runtime.
Add force-quantum and force-rate fields to the settings that can be
used to force a quantum and sample rate when != 0.
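A minimal sketch of the idea (struct and field names hypothetical):

    /* Defaults as loaded from the config file, or initialized with
     * built-in values; copied into the runtime settings at startup. */
    struct defaults {
        uint32_t clock_rate;
        uint32_t clock_quantum;
    };

    struct settings {
        uint32_t clock_rate;
        uint32_t clock_quantum;
        uint32_t clock_force_quantum;  /* forces the quantum when != 0 */
        uint32_t clock_force_rate;     /* forces the sample rate when != 0 */
    };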
Check if the port has a latency param and only try to set the
latency param when it appears to be available. This avoids sending unknown
latency params to an old client and erroring out.
Implement a recalculate-latency method on ports that takes the min
and max latency of all peer ports and sets that as the new port
latency.
When a link is made, let the output and input port recalculate
latencies.
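A sketch of how such a recalculation could look (struct port, struct
link and port_set_latency are hypothetical; struct spa_latency_info is
from spa/param/latency-utils.h):

    /* Hypothetical sketch: fold the latency of all peer ports into one
     * min/max range and install it as the new port latency. */
    static void port_recalc_latency(struct port *port)
    {
        struct spa_latency_info latency = { .direction = port->direction };
        struct link *l;
        bool first = true;

        spa_list_for_each(l, &port->links, link) {
            const struct spa_latency_info *peer = &l->peer->latency;
            if (first) {
                latency = *peer;
                first = false;
                continue;
            }
            latency.min_quantum = SPA_MIN(latency.min_quantum, peer->min_quantum);
            latency.max_quantum = SPA_MAX(latency.max_quantum, peer->max_quantum);
            latency.min_rate = SPA_MIN(latency.min_rate, peer->min_rate);
            latency.max_rate = SPA_MAX(latency.max_rate, peer->max_rate);
            latency.min_ns = SPA_MIN(latency.min_ns, peer->min_ns);
            latency.max_ns = SPA_MAX(latency.max_ns, peer->max_ns);
        }
        port_set_latency(port, &latency);  /* hypothetical helper */
    }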
Pass latency param in audioconvert.
SPA_MEMBER is misleading; all we're doing here is pointer+offset and
type-casting the result. Rename it to SPA_PTROFF, which is more expressive (and
has the same number of characters, so we don't need to re-indent).
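The macro itself keeps the same semantics; roughly:

    /* Pointer + offset, cast to the given type (what SPA_MEMBER
     * always did). */
    #define SPA_PTROFF(ptr_, offset_, type_) \
            ((type_ *) ((uintptr_t) (ptr_) + (size_t) (offset_)))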
Make a method to get a work-queue from the context. There is one
work-queue to use and the context will allocate it when requested
and free when destroyed.
The work queue is handy to delay execution of some logic until later,
either in the next iteration of the main-loop or when an async
operation completes.
Export some work-queue methods.
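A minimal sketch of the intended use (the callback body and the
SPA_ID_INVALID convention for "no async operation to wait for" are
assumptions here):

    /* Runs later: in the next main-loop iteration, or when the async
     * operation identified by the sequence number completes. */
    static void do_cleanup(void *obj, void *data, int res, uint32_t id)
    {
        /* ... deferred work on obj ... */
    }

    struct pw_work_queue *queue = pw_context_get_work_queue(context);
    pw_work_queue_add(queue, node, SPA_ID_INVALID, do_cleanup, NULL);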
Add a new context config option clock.power-of-two-quantum to make it
possible to not round the quantum down to the closest power of two.
This makes it possible to match the quantum of a source node with the
quantum of a client node.
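For example, in the context properties of pipewire.conf:

    context.properties = {
        clock.power-of-two-quantum = false  # don't round down to a power of two
    }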
Only activate the links when the node is added to a driver. Otherwise
the driver will expect us to decrement the activation counters for the
links but that won't happen because the node will not be processed
yet.
Instead we activate the links that were not activated yet when the
node is activated.
This improves scheduling for newly added links and nodes.
node.latency also influences the pipeline latency in that it can
push the latency above the default value.
node.max-latency, instead, is only used to clamp the final latency
of the pipeline.
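For example, as node properties (a sketch; values are quantum/rate
fractions):

    node.latency     = 256/48000   # can push the graph latency above the default
    node.max-latency = 1024/48000  # clamps the final latency of the pipeline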
Move the daemon config file loading to a new conf.c file, used by
the context to load the configuration. This replaces the module
profiles and some hacks to move properties around.
If nothing else is specified with $PIPEWIRE_CONFIG_NAME or
a property, the client.conf file is loaded as a fallback.
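For example, to start the daemon with a different config file
(minimal.conf is a hypothetical name):

    PIPEWIRE_CONFIG_NAME=minimal.conf pipewire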
Update the session manager config file to load the modules via the
config now, and rename the session modules section.
Update pipewire-pulse to also load a specific pulse property file.
This makes it possible to assign specific RT priorities to the
pipewire-pulse process.
Make it possible to add a NULL param to the pending list. The NULL
param removes all previous updates.
When applying the updates, the NULL param removes the params from
the target list.
For the cached params in the node/device/port we need to be careful
because multiple clients might ask for updates concurrently. Clear
the pending list whenever a new param update starts so that we always
only keep the last set of updates.
This has two advantages: it actually removes params that become
unreadable or that got removed and it allows us to update the target
list more efficiently in one single loop.
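A hypothetical sketch of the apply step (all names invented for
illustration):

    /* A NULL pod queued in the pending list acts as a marker that
     * clears all params with the same id from the target list. */
    struct pending_param {
        struct spa_list link;
        uint32_t id;
        struct spa_pod *param;  /* NULL removes all params with this id */
    };

    static void apply_pending(struct spa_list *pending)
    {
        struct pending_param *p;

        spa_list_consume(p, pending, link) {
            if (p->param == NULL)
                target_remove_params(p->id);        /* hypothetical */
            else
                target_add_param(p->id, p->param);  /* hypothetical */
            spa_list_remove(&p->link);
            free(p);
        }
    }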
Make a new complete event and use it instead of the start event.
Use the start event at the start of the cycle.
Make the profiler also log incomplete graph cycles.
Set the destroyed flag on globals and make sure they are not
available anymore during the signal emissions. One such instance
is where a global is destroyed, the resources are destroyed,
a waiting client is resumed, and the client then asks to bind to the
destroyed global, causing an error.
See #298
When the core is disconnected, first do _remove on all active
proxies and then run the _destroy handlers. This makes it easier
to clean up dependent resources.
Remove the modules after removing all objects; it's harder for
modules to track and remove all objects.
Keep track of whether the proxy is still in the core object map
and make sure to delete it only once.
In _remove, just remove the item from the object map if the proxy
is destroyed. There is no need to call _proxy_destroy again (and
do the extra unref).