Make sure newer clients can work with an older server:
- Add client and server versions in the activation
- On an older server, clients need to trigger peers without a CAS of the status
- On an older server, the JACK transport is started with a command.
- Use the client version to know when to set the INACTIVE/FINISHED
state on the server instead.
- Async clients need to trigger peers on an old server.
Manage the targets like we do on the client and reuse the logic. Make a
node function to safely add and remove a target.
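Roughly, such a helper could defer the list change to the data loop; a
minimal sketch, assuming the targets live on a spa_list owned by the
data loop (struct target and node_add_target are made-up names for
illustration, not the actual PipeWire API):

    #include <stdbool.h>
    #include <stdint.h>
    #include <spa/utils/list.h>
    #include <pipewire/pipewire.h>

    /* Illustrative types, not the real PipeWire structures. */
    struct target {
            struct spa_list link;
            int fd;                         /* eventfd used to trigger this peer */
    };

    struct node_data {
            struct pw_loop *data_loop;
            struct spa_list targets;        /* only touched from the data loop */
    };

    static int do_add_target(struct spa_loop *loop, bool async, uint32_t seq,
                    const void *data, size_t size, void *user_data)
    {
            struct node_data *node = user_data;
            struct target *t = *(struct target **)data;
            spa_list_append(&node->targets, &t->link);
            return 0;
    }

    static void node_add_target(struct node_data *node, struct target *t)
    {
            /* update the list in the data loop so the processing code
             * never sees a half-modified target list */
            pw_loop_invoke(node->data_loop, do_add_target, 0,
                            &t, sizeof(void *), false, node);
    }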
Activate the targets from the process loop when we can be sure that we
can resume them. This avoids incrementing the pending state when we are
not going to be able to resume the nodes (like when the cycle is ongoing
and we have already been scheduled) and avoids glitches and xruns.
When a node is added to the poll loop, it can activate its own targets.
This is mostly for drivers so that they have something to schedule and
can then activate the other targets.
Try to resume the target when it is removed and we are supposed to be
scheduled.
Also add targets to the target_list when the node is remote to make sure
the profiler can see the targets as well.
Keep the node in the INACTIVE state as long as the eventfd of the node
is not added to the loop. Skip nodes in the INACTIVE state from going to
the NOT_TRIGGERED status, which avoids scheduling the node.
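A simplified sketch of that rule, using hypothetical types whose state
names only mirror the INACTIVE/NOT_TRIGGERED states mentioned above:

    #include <stdbool.h>

    /* Hypothetical, simplified status handling; this is an illustration,
     * not the real scheduling code. */
    enum node_status { INACTIVE, NOT_TRIGGERED, TRIGGERED, FINISHED };

    struct sched_node {
            enum node_status status;
            bool added;             /* eventfd added to the data loop? */
    };

    static void prepare_for_cycle(struct sched_node *n)
    {
            if (!n->added || n->status == INACTIVE)
                    return;         /* INACTIVE nodes are never scheduled */
            n->status = NOT_TRIGGERED;
    }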
Make sure we remove any local targets we have in a node when we export
it; we will receive new targets from the server.
This should eliminate any glitches when adding and removing nodes from
the graph.
See #4026, #2468
Change the GenericFd data type to SyncObj. It's probably better to
explicitly state the data type than to make something generic. Otherwise
we would need to transfer the specific fd type somewhere else: there is
no room for that in the buffer, and the metadata is not a good idea
either because it can be modified and corrupted at runtime.
Add the SyncTimeline metadata. This contains 2 points on two timelines
(SyncObj datas in the buffer). The buffer can be accessed when the
acquire_point is signaled on the timeline; when the buffer can be
released, the release_point on the timeline should be signaled.
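As a sketch, the metadata amounts to two 64-bit points that refer to
the two SyncObj datas in the buffer; the struct and field names below
are illustrative, not the exact SPA definitions:

    #include <stdint.h>

    /* Illustrative layout; the actual SPA metadata may differ in naming. */
    struct sync_timeline_meta {
            uint32_t flags;
            uint32_t padding;
            uint64_t acquire_point; /* the buffer may be accessed once this point
                                     * is signaled on the acquire SyncObj timeline */
            uint64_t release_point; /* signal this point on the release SyncObj
                                     * timeline when the buffer can be released */
    };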
When node.async is set, make the node async.
Advertise SPA_IO_AsyncBuffers on mixer ports when supported. Set a new
port flag when AsyncBuffer is supported on the port.
When making a link, if one of the nodes is async and the linked ports
support AsyncBuffer, make the link async and set this as a property on
the link. For async nodes we will use SPA_IO_AsyncBuffers on the mixer
ports.
Nodes that are async will not increment the peer required counters. This
ensures that the peer can start immediately, without waiting for the
async node to be ready.
On an async link, writers will write to the ((cycle + 1) & 1) async
buffer entry and readers will read from (cycle & 1). This makes the
readers read from the previously filled area.
We need to have two very controlled areas with specific rules for who
reads and who writes where because the two nodes will run concurrently
and no special synchronization is possible otherwise.
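A minimal sketch of the indexing rule, assuming cycle is the graph cycle
counter (the helper names are made up for illustration):

    #include <stdint.h>

    /* The writer fills the "next" slot, the reader consumes the slot that
     * was filled in the previous cycle. */
    static inline uint32_t async_writer_index(uint64_t cycle)
    {
            return (uint32_t)((cycle + 1) & 1);
    }

    static inline uint32_t async_reader_index(uint64_t cycle)
    {
            return (uint32_t)(cycle & 1);
    }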
These async nodes can be paused and blocked without blocking or xrunning
the rest of the graph. If the node didn't produce anything when the next
cycle starts, the graph will run with silence.
See #3509
Change the flag from MAPPABLE to UNMAPPABLE to ease compatibility.
An older server with a newer client will not set the flag, so memory is
mappable for the client.
A newer server will set the flag, but an older client will ignore it and
act like before.
This flag means that the fd can be mmapped without special handling. It
is the equivalent of SPA_DATA_FLAG_MAPPABLE. Refuse to map memory
that is not mappable.
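A hedged sketch of that rule; MEM_FLAG_MAPPABLE and map_memblock are
placeholders rather than the real PipeWire symbols:

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define MEM_FLAG_MAPPABLE (1u << 0)     /* placeholder, not the real flag name */

    static void *map_memblock(int fd, size_t size, uint32_t flags)
    {
            if (!(flags & MEM_FLAG_MAPPABLE)) {
                    errno = EPERM;
                    return MAP_FAILED;      /* refuse: memory is not mappable */
            }
            return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    }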
Make sure we make all allocated MemFd memory MAPPABLE by default. We can
then remove the stream and filter special handling for MemFd types and
just check the more generic MAPPABLE flag.
Make one exception when a client uploads MemFd buffer memory. We must
manually set the MAPPABLE flag for MemFd to make things backwards
compatible.
clear_port() clears all struct mix and removes the port, and can occur
before the port io is set to NULL. In this case the port memmaps are not
freed and are leaked until the client pool closes.
Fix by freeing the old io memmap when setting io to NULL, also when the
port was already cleared, so that the memmap lifecycle is the same as
that of the IO.
Avoid leaking buffers when freeing mix, in case the port was not cleared
properly.
These leaks don't seem to be occurring currently, but better be sure.
The remote end may destroy the port via client_node_port_update(),
before the corresponding pw_impl_port_mix are released.
clear_port() removes all struct mix, but this prevents the
pw_impl_port_mix from being removed from io_map, which causes stale mix
ids to be left in io_map, so we end up continuously allocating new io
areas.
Make lifecycle of io_map entries match port_init_mix/release_mix
exactly, separately from the lifecycle of the port and struct mix.
When freeing struct mix in port_release_mix(), make sure it corresponds
to the mix being released.
First check if all of the new buffers are ok before attempting to replace
our existing ones with the new ones. Otherwise we might end up copying
some of the new buffers and cleaning them up twice later.
struct mix contains pointers to itself (see do_port_use_buffers) and
cannot be copied by value, so these structs should not be stored in a
pw_array. Store them in a pw_map instead.
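A small sketch of why the struct needs a stable address: pw_array stores
(and may move) elements by value, while pw_map only stores a pointer to
a separately allocated struct. The struct mix below is a stand-in, not
the real definition:

    #include <stdint.h>
    #include <stdlib.h>
    #include <pipewire/map.h>

    /* Stand-in for struct mix, not the real definition. */
    struct mix {
            struct mix *self;       /* pointer back into the struct itself */
            uint32_t id;
    };

    static struct mix *mix_new(struct pw_map *map)
    {
            struct mix *m = calloc(1, sizeof(*m));
            if (m == NULL)
                    return NULL;
            m->self = m;                            /* stays valid: m never moves */
            m->id = pw_map_insert_new(map, m);      /* the map stores only the pointer */
            return m;
    }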
Remove the context_driver events and replace them with realtime node
events. The problem is that the realtime node events are emitted from
the node data thread, which can be different for each node, and
aggregating them into context_driver events is not a good idea.
It's also nice for the stream drained event, which no longer needs to go
through the context_driver events.
Remove some includes of private.h
Add some methods to get the mempool of client and context so that we can
remove direct access.
Move some things around.
Use methods to get pw_loop variables.
See #3243
Use port_set_mix_info to add and remove mix info on the client.
Previously it was impossible to clean up mix_info.
With this change we can also simplify the jack peer port detection.
Because the mix info is always sent before the link appears we can
simply look up the info when the link appears.
There is already an id in the port_mix structure that can be used to
index the mix structures in a map if needed; the mix_id is the port_id
of the mixer and should not be confused with it.
Bump the client-node version because we use the writefd differently now.
Somewhat support driver nodes that use the old version. The stats will
be wrong, but then again, we don't have any flatpak driver nodes that
could use an older version.
Don't make an extra eventfd for activating the remote-node; we can
use the server-side eventfd and send it to the remote side using
the transport.
The remote node already adds the eventfd to the data-loop, so avoid
doing the same on the server.
This makes driver nodes trigger all remote nodes directly instead of
going through an intermediate eventfd. For resuming nodes, we already
used the node eventfd directly, so this is only a small optimization
for the initial cycle start.
Pass the ready status to the client-node using the state array.
Don't just assume SPA_STATUS_HAVE_DATA on the server side but use the
value from the client.
This avoids some potential extra work when a driver sink pulls in data
with the NEED_DATA ready callback but the server then performs the
actions (tee) as if it were SPA_STATUS_HAVE_DATA.
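A rough illustration of the idea; only the SPA_STATUS_* constants are
real SPA symbols here, the state struct and helper are simplified
stand-ins:

    #include <spa/utils/defs.h>
    #include <spa/node/io.h>

    /* Simplified stand-in for a per-node entry in the state array. */
    struct node_state {
            int32_t status;         /* ready status written by the client */
    };

    static void handle_node_ready(struct node_state *state)
    {
            if (SPA_FLAG_IS_SET(state->status, SPA_STATUS_HAVE_DATA)) {
                    /* only now run the server-side output handling (tee) */
            }
            /* with SPA_STATUS_NEED_DATA the HAVE_DATA work is skipped */
    }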
First set some of the port flags and data, especially the owner_data
before calling pw_impl_port_set_mix(), which will use the owner_data
to send the mix_info.
See #2847