Keep track of the active state of the nodes; when both nodes of a link
are active, we can prepare (negotiate) the link.
After a link has been prepared, we can activate it. When we deactivate
the link, we don't need to prepare it again.
When a port loses buffers or format, set the link back to the
unprepared state.
This fixes the case where (see the sketch below):
1) a node becomes inactive and goes to suspend, and the link becomes
   unprepared
2) the node becomes active again and the link needs to be prepared
   again
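Roughly, the link states could look like this (a hypothetical sketch
with made-up struct and function names, not the actual PipeWire code):

    #include <stdbool.h>

    enum link_state {
        LINK_UNPREPARED,  /* no format/buffers negotiated */
        LINK_PREPARED,    /* format and buffers negotiated */
        LINK_ACTIVE,      /* data is flowing */
    };

    struct node { bool active; };
    struct link {
        struct node *output_node, *input_node;
        enum link_state state;
    };

    static int link_prepare(struct link *l);  /* hypothetical negotiation */

    static void link_update(struct link *l)
    {
        /* only prepare/activate when both nodes are active */
        if (!l->output_node->active || !l->input_node->active) {
            /* deactivating does not unprepare the link */
            if (l->state == LINK_ACTIVE)
                l->state = LINK_PREPARED;
            return;
        }
        if (l->state == LINK_UNPREPARED && link_prepare(l) >= 0)
            l->state = LINK_PREPARED;
        if (l->state == LINK_PREPARED)
            l->state = LINK_ACTIVE;
    }

    /* a port losing its buffers or format drops back to unprepared */
    static void link_unprepare(struct link *l)
    {
        l->state = LINK_UNPREPARED;
    }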
Keep the first active node as the fallback node. We use this node
as the target when no other active nodes exist.
Only assign the target node to active nodes.
When we assign an unassigned node, simply update the active followers
of the target; the state of the nodes and of the target itself will
then be taken care of later.
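As an illustration, a minimal sketch of the fallback/target selection
(the struct layout and node_set_active are made up; spa_list_for_each
is the real SPA list iterator):

    #include <stdbool.h>
    #include <spa/utils/list.h>

    struct node {
        struct spa_list link;
        bool active;
    };
    struct context {
        struct spa_list nodes;
        struct node *fallback;  /* the first node that became active */
    };

    static void node_set_active(struct context *ctx, struct node *node,
                                bool active)
    {
        node->active = active;
        if (active && ctx->fallback == NULL)
            ctx->fallback = node;  /* remember the first active node */
    }

    /* the target is any other active node, else the fallback */
    static struct node *get_target(struct context *ctx)
    {
        struct node *n;
        spa_list_for_each(n, &ctx->nodes, link)
            if (n->active && n != ctx->fallback)
                return n;
        return ctx->fallback;
    }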
When registering nodes, only active nodes can influence the state of the
graph.
Update some comments.
Add the reason why we recalculate the graph, for debugging purposes.
Only recalculate the graph when something relevant changed.
Block recalc from being called recursively.
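A sketch of what the guarded recalc could look like (the context
fields and do_recalc are hypothetical; pw_log_debug is the real
logging macro):

    #include <stdbool.h>
    #include <pipewire/log.h>

    struct context {
        bool recalc;          /* a recalc is in progress */
        bool recalc_pending;  /* a recursive request arrived meanwhile */
    };

    static void do_recalc(struct context *ctx);  /* hypothetical worker */

    static void recalc_graph(struct context *ctx, const char *reason)
    {
        pw_log_debug("recalc graph: %s", reason);
        if (ctx->recalc) {
            /* called recursively: remember it and let the outer
             * call redo the work */
            ctx->recalc_pending = true;
            return;
        }
        ctx->recalc = true;
        do {
            ctx->recalc_pending = false;
            do_recalc(ctx);
        } while (ctx->recalc_pending);
        ctx->recalc = false;
    }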
Keep track of when a link is prepared: this is when the link has
successfully negotiated a format and buffers.
Only follow prepared links when collecting nodes in the graph.
Set the state of the driver and its nodes based on how many active
nodes the driver has. We then no longer have to do state changes on
the nodes from the link, and we can get rid of the counters.
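For example, the driver state could be derived like this (hypothetical
structs and setter; spa_list_for_each is the real SPA iterator):

    #include <stdbool.h>
    #include <stdint.h>
    #include <spa/utils/list.h>

    struct node {
        struct spa_list driver_link;
        bool active;
    };
    struct driver {
        struct spa_list nodes;  /* nodes assigned to this driver */
    };

    static void driver_set_running(struct driver *d, bool running);  /* hypothetical */

    static void update_driver_state(struct driver *d)
    {
        uint32_t n_active = 0;
        struct node *n;

        spa_list_for_each(n, &d->nodes, driver_link)
            if (n->active)
                n_active++;
        /* run the driver and its nodes only when something is active;
         * no per-link counters or per-link state changes are needed */
        driver_set_running(d, n_active > 0);
    }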
Only set the io on the mixer ports when prepared because we might
need a special mixer element based on the format.
Remove passive links for now.
This fixes many cases where the graph would stall when linking/unlinking
ports in various combinations.
Fixes #221
Rename (the non-exported symbol) _unref_id -> _remove_id and make it
remove the id from the map of known ids. This way, the server can send
remove_mem and reuse the id for new memory before all references are
gone. Fixes "invalid mem id X, expected Y" errors.
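In pseudo-C, the new semantics are roughly (the map helpers and struct
are made up for illustration):

    #include <stdint.h>
    #include <stdlib.h>

    struct mem { uint32_t ref; /* ... */ };
    struct mem_map;

    struct mem *map_lookup(struct mem_map *map, uint32_t id);  /* hypothetical */
    void map_remove(struct mem_map *map, uint32_t id);         /* hypothetical */

    static void client_remove_id(struct mem_map *map, uint32_t id)
    {
        struct mem *m = map_lookup(map, id);
        if (m == NULL)
            return;
        /* drop the id -> mem mapping now; the server may reuse the
         * id for new memory right away */
        map_remove(map, id);
        /* the memory itself stays alive until the last reference
         * is dropped */
        if (--m->ref == 0)
            free(m);
    }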
Rework the metadata implementation without pw_properties to make
it easier to delete all subjects and implement the metadata API.
Remove metadata from all objects when they are destroyed.
When the resource does add_listener, send a message to the proxy
to trigger an emission of properties.
Block the client until all properties have been notified; track
this with a ping event to the implementation.
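The round-trip could be sketched like this (all names here are
hypothetical stand-ins for the server internals):

    struct proxy;
    struct client;

    struct resource {
        struct proxy *proxy;
        struct client *client;
        int ping_seq;
    };

    void proxy_emit_properties(struct proxy *p);  /* hypothetical */
    int proxy_ping(struct proxy *p);              /* hypothetical */
    void client_block(struct client *c);          /* hypothetical */
    void client_unblock(struct client *c);        /* hypothetical */

    static void metadata_add_listener(struct resource *r)
    {
        /* ask the implementation (via its proxy) to re-emit everything */
        proxy_emit_properties(r->proxy);
        /* the ping marks the end of the property stream; the client
         * stays blocked until the matching pong arrives */
        r->ping_seq = proxy_ping(r->proxy);
        client_block(r->client);
    }

    static void on_pong(struct resource *r, int seq)
    {
        if (seq == r->ping_seq)
            client_unblock(r->client);  /* all properties notified */
    }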
Setting the size on input stream buffers based on the elapsed
ticks does not give a meaningful value for the queue size, so
just leave it to the user to set the size field.
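For illustration, a process callback where the application fills a
buffer and sets the size itself (fill_audio is a hypothetical producer;
the stream calls are the real pw_stream API):

    #include <pipewire/pipewire.h>

    static uint32_t fill_audio(void *dst, uint32_t maxsize);  /* hypothetical */

    static void on_process(void *userdata)
    {
        struct pw_stream *stream = userdata;
        struct pw_buffer *b;
        struct spa_data *d;

        if ((b = pw_stream_dequeue_buffer(stream)) == NULL)
            return;
        d = &b->buffer->datas[0];
        d->chunk->offset = 0;
        d->chunk->stride = 4;  /* e.g. 16-bit stereo */
        d->chunk->size = fill_audio(d->data, d->maxsize);  /* user-set size */
        pw_stream_queue_buffer(stream, b);
    }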
By default, the pipewiresrc tries to negotiate 16 buffers. This value is
hard-coded in the pipewiresrc. If the buffers are large, this can lead
to undesirably high memory usage. Applications that know the buffer
size, and that fewer buffers are sufficient, should be able to
configure the limits for the number of buffers that are negotiated.
Therefore, add the min-buffers and max-buffers properties to the
pipewiresrc to enable applications to configure limits for the number of
negotiated buffers.
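Usage would then look like this (assuming the properties take plain
integers):

    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);
        GstElement *src = gst_element_factory_make("pipewiresrc", NULL);
        if (src == NULL)
            return -1;
        /* cap the number of negotiated buffers instead of the
         * hard-coded 16 */
        g_object_set(src, "min-buffers", 4, "max-buffers", 8, NULL);
        /* ... add src to a pipeline and run it ... */
        return 0;
    }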
Make a new DRAINED status.
Place the DRAINED status on an input IO when a stream is out of
buffers and draining.
All nodes that don't have HAVE_DATA on the input io need to copy
the status to the output io and return it. This makes sure the
DRAINED is forwarded and that nodes return DRAINED from _process().
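Schematically, for a pass-through node (the signature is simplified
for illustration; the SPA_STATUS_* values are the real ones from
spa/node/io.h):

    #include <spa/node/io.h>

    static int impl_node_process(struct spa_io_buffers *in_io,
                                 struct spa_io_buffers *out_io)
    {
        if (in_io->status != SPA_STATUS_HAVE_DATA) {
            /* no data: copy the status (NEED_DATA, DRAINED, ...) to
             * the output io so it travels downstream, and return it */
            out_io->status = in_io->status;
            return in_io->status;
        }
        /* ... process the input buffer ... */
        return SPA_STATUS_HAVE_DATA;
    }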
DRAINED on the resampler flushes out the last queued samples and then
forwards the DRAINED in the next iteration.
Emit a new drained signal from the context when a node returns
DRAINED. Use this to trigger the drained signal in the stream.
Abuse the xrun callback in the adapter to emit the drained signal until
almost all data has left the resampler. This needs more work: a proper
signal and a buffer flag to signal the drain.
This tests exporting a PW_TYPE_INTERFACE_Endpoint and binding
a proxy for it through the registry, verifying that info and params
are propagated properly from one to the other.
The info structure needs to be cached because there is no way to
request it from the implementation, unless we hack the add_listener
API to be used for making info requests or add a new method that
will be used just in the implementation (both are bad ideas).
The params are cached because (see the sketch below):
1) a client doing enum_params + sync will not work correctly, since
   the sync call syncs with the server and not the implementation.
   We could block the client to solve that, but then there is also
   point 2)
2) the implementation is not aware of the clients and therefore
   it cannot keep track of who is subscribed and who is not; this
   needs to happen in the server. But if we only keep track of the
   subscriptions in the server and keep requesting params from the
   impl, there is no way to know whether a param event coming from
   the impl matches a call to enum_params or to subscribe, or if it's
   just an update that needs to be forwarded to subscribers.
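A sketch of the server-side cache (the struct and function are made
up; SPA_POD_SIZE and spa_list_append are real SPA helpers):

    #include <stdlib.h>
    #include <string.h>
    #include <spa/pod/pod.h>
    #include <spa/utils/list.h>

    struct cached_param {
        struct spa_list link;
        uint32_t id;            /* param id, e.g. SPA_PARAM_Props */
        struct spa_pod *param;  /* private copy of the pod */
    };

    /* called for every param event coming from the implementation */
    static void cache_param(struct spa_list *cache, uint32_t id,
                            const struct spa_pod *param)
    {
        struct cached_param *cp = calloc(1, sizeof(*cp));
        cp->id = id;
        cp->param = malloc(SPA_POD_SIZE(param));
        memcpy(cp->param, param, SPA_POD_SIZE(param));
        spa_list_append(cache, &cp->link);
        /* enum_params and subscribe are then answered from this
         * cache, with no extra round-trip to the implementation */
    }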