Make sure newer clients can work with an older server:
- Add client and server versions in the activation
- On an older server, clients need to trigger peers without a CAS of the status; see the sketch after this list.
- On an older server, the JACK transport is started with a command.
- Use the client version to know when to set the INACTIVE/FINISHED
state on the server instead.
- Async clients need to trigger peers on an old server.
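As a very rough illustration of the triggering difference (the field and constant names here are made up for the sketch; the real ones live in PipeWire's private activation structures):

    #include <stdatomic.h>
    #include <stdint.h>

    /* illustrative status values, mirroring PipeWire's activation states */
    enum { NOT_TRIGGERED, TRIGGERED };

    struct activation {
        _Atomic uint32_t status;
        uint32_t server_version;  /* hypothetical: exchanged in the activation */
    };

    static void trigger_peer(struct activation *a)
    {
        uint32_t expected = NOT_TRIGGERED;
        if (a->server_version == 0)
            /* old server: no CAS, just set the status */
            atomic_store(&a->status, TRIGGERED);
        else if (!atomic_compare_exchange_strong(&a->status, &expected, TRIGGERED))
            return;  /* someone else already triggered this cycle */
        /* ... signal the peer's eventfd here ... */
    }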
Also handle the relation between a node and the driver with pw_node_peer,
like we do with the links.
Because these are refcounted, we only make one peer for a node that is
linked to another node that is also the driver (pw-play -> sink), which
saves some fds as well as some administration and refcounting overhead.
This in turn results in fewer problems getting all the refcounts
right when adding/removing nodes.
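Conceptually the peer is a small refcounted handle shared by all links
between the same two nodes; a sketch with made-up names (struct
pw_node_peer itself is internal to PipeWire):

    struct node;

    struct node_peer {
        int ref;              /* one refcount per node/driver or node/node pair */
        struct node *target;  /* the node we signal when our data is ready */
        int eventfd;          /* signalling fd, allocated once per peer */
    };

    /* hypothetical helpers standing in for the real internals */
    struct node_peer *find_peer(struct node *node, struct node *target);
    struct node_peer *make_peer(struct node *node, struct node *target);
    void free_peer(struct node_peer *peer);

    static struct node_peer *node_peer_ref(struct node *node, struct node *target)
    {
        struct node_peer *p = find_peer(node, target);
        if (p == NULL)
            p = make_peer(node, target);  /* allocates the fd, adds the target */
        p->ref++;
        return p;
    }

    static void node_peer_unref(struct node_peer *p)
    {
        if (--p->ref == 0)
            free_peer(p);  /* removes the target, closes the fd */
    }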
Before loading the node or device spa plugin, evaluate the node and
device rules so that we can use them to configure the plugin properties
when it is loaded.
Manage them like we do on the client and reuse the logic. Add a node
function to safely add and remove a target.
Activate the targets from the process loop when we can be sure that we
can resume them. This avoids incrementing the pending state when we are
not going to be able to resume the nodes (like when the cycle is ongoing
and we have already been scheduled) and avoids glitches and xruns.
When a node is added to the poll loop, it can activate its own targets.
This is mostly for drivers, so that they have something to schedule and
can then activate the other targets.
Try to resume the target when it is removed and we are supposed to be
scheduled.
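A condensed sketch of that flow, using SPA's list macros but otherwise
made-up names:

    #include <spa/utils/list.h>

    struct target {
        struct spa_list link;
        int active;                /* only set from the process loop */
    };

    struct node {
        struct spa_list targets;
        int added;                 /* the node's eventfd is in the poll loop */
    };

    /* safe to call while the graph is running: the target is linked in
     * but not yet activated */
    static void node_add_target(struct node *node, struct target *t)
    {
        t->active = 0;
        spa_list_append(&node->targets, &t->link);
    }

    /* called from the process loop, where we know we can still resume
     * the targets we activate in this cycle */
    static void node_activate_targets(struct node *node)
    {
        struct target *t;
        spa_list_for_each(t, &node->targets, link)
            t->active = 1;
    }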
Also add targets to the target_list when the node is remote to make sure
the profiler can see the targets as well.
Keep the node in the INACTIVE state as long as the eventfd of the node
is not added to the loop. Prevent nodes in the INACTIVE state from going
to the NOT_TRIGGERED status, which avoids scheduling them.
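For illustration, the status reset could be guarded like this (the
constants mirror PipeWire's activation states, but their values here
are made up):

    #include <stdint.h>
    #include <spa/utils/atomic.h>

    enum {
        PW_NODE_ACTIVATION_NOT_TRIGGERED = 0,
        PW_NODE_ACTIVATION_INACTIVE = 4,  /* illustrative value */
    };

    /* reset a node for the next cycle; INACTIVE nodes keep their state
     * (their eventfd is not in the loop yet) and are never scheduled */
    static void node_reset_status(uint32_t *status)
    {
        if (SPA_ATOMIC_LOAD(*status) != PW_NODE_ACTIVATION_INACTIVE)
            SPA_ATOMIC_STORE(*status, PW_NODE_ACTIVATION_NOT_TRIGGERED);
    }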
Make sure we remove any local targets we have in a node when we export
it; we will receive new targets from the server.
This should eliminate any glitches when adding and removing nodes from
the graph.
See #4026, #2468
When for some reason we don't manage to transfer data from the source
or to the sink (timeout, scheduling problems, ...), try again when we
get a timeout, to avoid xruns.
The module detects remote Snapcast servers and creates a new sink
with protocol-simple for each server.
It sets up a new stream on the server for the sink using JSON-RPC.
Handle IPv6 addresses.
Support port 0, which picks a free port to listen on.
Place the list of addresses we listen on as a property of the module so
that dynamically allocated ports can be retrieved.
Since `spa/utils/cleanup.h` is not a private header anymore, there is
no need for a separate `pipewire/cleanup.h`: the definitions of
the cleanup routines can now be moved into the respective headers.
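For example, code can now include the SPA header directly for the
scope-based cleanup helpers (assuming the spa_autofree macro from
`spa/utils/cleanup.h`):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <spa/utils/cleanup.h>

    static int print_copy(const char *name)
    {
        /* freed automatically when the variable goes out of scope */
        spa_autofree char *copy = strdup(name);
        if (copy == NULL)
            return -errno;
        printf("%s\n", copy);
        return 0;
    }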
This makes it possible to discover local RAOP, pulse or RTP services
and connect to them.
IPv6 addresses need the interface appended to local addresses to
make the connection work.
Add capture.props and playback.props to configure the created streams
with arbitrary properties.
Improve format parsing; make it possible to have different formats per
stream.
Improve some of the property handling.
This can now also be used to upload a stream to a Snapcast server; add
an example of this to the docs.
Websites like squig.link or https://www.autoeq.app/ generate a file with
parametric equalization settings for a given target, but this is not a
format that can be given directly to the filter-chain module.
This module translates the file into filter-chain arguments and
then loads the filter-chain module with those arguments.
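For instance, a typical parametric EQ file from such a site looks
roughly like this (values made up for illustration):

    Preamp: -6.4 dB
    Filter 1: ON PK Fc 105 Hz Gain -2.4 dB Q 0.70
    Filter 2: ON LSC Fc 105 Hz Gain 1.5 dB Q 0.70

Each Filter line can then be mapped onto one biquad node in the
filter-chain graph, a PK line corresponding to a peaking biquad such as
bq_peaking with matching Freq, Gain and Q controls.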
Don't spam the warning about the kernel missing features required for
Snap on every PulseAudio connection; show it only once instead, as the
situation is not going to improve.
Change the GenericFd data type to SyncObj. It's probably better to
explicitly state the data type than to make something generic. Otherwise
we would need to transfer the specific fd type somewhere else, and there
is no room for that in the buffer; the metadata is not a good idea
either because it can be modified and corrupted at runtime.
Add the SyncTimeline metadata. This contains two points on two timelines
(SyncObj datas in the buffer). The buffer can be accessed when the
acquire_point is signaled on the timeline; when the buffer can be
released, the release_point on the timeline should be signaled.
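A sketch of the consumer side of this protocol (the helper functions
and the exact metadata layout are assumptions for illustration):

    #include <stdint.h>

    /* mirrors the described metadata: two points, one on each timeline */
    struct sync_timeline_meta {
        uint64_t acquire_point;   /* wait on this before touching the buffer */
        uint64_t release_point;   /* signal this when the buffer can be reused */
    };

    /* hypothetical helpers wrapping a timeline sync object (e.g. a drm syncobj) */
    int timeline_wait(int syncobj_fd, uint64_t point);
    int timeline_signal(int syncobj_fd, uint64_t point);

    static void consume_buffer(int acquire_fd, int release_fd,
                               const struct sync_timeline_meta *m)
    {
        timeline_wait(acquire_fd, m->acquire_point);   /* buffer is now readable */
        /* ... process the buffer data ... */
        timeline_signal(release_fd, m->release_point); /* producer may reuse it */
    }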
Expose the acquire_loop/release_loop functions and use them in the
modules.
Make sure the nodes created from the module use the same data loop as
the module. We need to ensure this because otherwise the nodes might
be scheduled on different data loops and the invoke or timer logic will
fail.
Since we don't follow updates of the params on the mixer but only on the
port, we might get out of sync and fail to negotiate.
Going through the mixers for everything needs some more work.
Fixes #3971
When resample.disabled=true, which is now the default, the Format has a
zero rate, so the latency buffers get zero size. The rate in this case
is the graph rate.
Fix this by just using the delay in samples, as all streams must in any
case run at the same rate for the combining to work.
Fixes: bff252ce60 ("combine-stream: actually make use of resample.disable")
When node.async is set, make the node async.
Advertise SPA_IO_AsyncBuffers on mixer ports when supported. Set a new
port flag when AsyncBuffer is supported on the port.
When making a link, if one of the nodes is async and the linked ports
support AsyncBuffer, make the link async and set this as a property on
the link. For async nodes we will use SPA_IO_AsyncBuffers on the mixer
ports.
Nodes that are async will not increment the peer required counters. This
ensures that the peer can start immediately before the async node is
ready.
On an async link, writers will write to async buffer entry
((cycle+1) & 1) and readers will read from entry (cycle & 1). This makes
the readers read from the previously filled area.
We need to have two very controlled areas with specific rules for who
reads and who writes where because the two nodes will run concurrently
and no special synchronization is possible otherwise.
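With two buffer entries, the indexing then looks like this (a minimal
sketch of the rule above):

    #include <stdint.h>

    /* the writer of cycle N fills the entry the reader of cycle N+1 will
     * consume; in any given cycle the two sides use different entries */
    static inline uint32_t writer_entry(uint64_t cycle)
    {
        return (cycle + 1) & 1;
    }

    static inline uint32_t reader_entry(uint64_t cycle)
    {
        return cycle & 1;
    }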
These async nodes can be paused and blocked without blocking or xrunning
the rest of the graph. If the node didn't produce anything when the next
cycle starts, the graph will run with silence.
See #3509
Go through the mixers of the port to get the params.
This makes it possible to let the mixer decide on formats, buffers and
io areas.
Currently, the format is the same on all mixer input and output ports
and the buffers are shared on the output port, but the idea is to make
it possible to have different formats and buffers per link.