Have monitor streams not affect the latency calculation of the ports
they are connected to.
Connecting monitor streams to ports (e.g. volume level monitoring)
should not affect latency values of other streams connected to those
ports.
This prevents e.g. just running pavucontrol from modifying the latencies of
all existing ports.
Add a port.ignore-latency prop which, if true, causes peer ports to ignore
the latency of the given port.
This is useful for ports that are not intended to affect latency
calculations of other ports, such as ports in monitor streams.
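As a rough illustration of how a monitoring client could use this, the sketch below sets the new property on a capture stream's properties. The property name comes from this change; the surrounding stream setup is only an assumed example, not the actual client code.

```c
#include <pipewire/pipewire.h>

/* Illustrative sketch: a level-monitoring capture stream whose ports
 * should not influence the latency of the ports it connects to. */
static struct pw_stream *make_monitor_stream(struct pw_core *core)
{
	struct pw_properties *props = pw_properties_new(
			PW_KEY_MEDIA_TYPE, "Audio",
			PW_KEY_MEDIA_CATEGORY, "Capture",
			PW_KEY_STREAM_MONITOR, "true",
			/* new: peer ports ignore this port's latency */
			"port.ignore-latency", "true",
			NULL);

	return pw_stream_new(core, "peak-monitor", props);
}
```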
It is possible that the source/sink was already destroyed by the time we get
a latency update from JACK; don't try to update the source/sink in that case
or we will crash.
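A minimal sketch of the guard, using a hypothetical impl struct and helper rather than the actual module code:

```c
#include <stddef.h>
#include <jack/jack.h>

struct impl {
	void *sink;	/* hypothetical objects owned by the module */
	void *source;
};

static void update_latency(struct impl *impl, jack_latency_callback_mode_t mode);

/* Hypothetical JACK latency callback: the sink/source may already have been
 * destroyed, so bail out instead of touching freed objects. */
static void on_jack_latency(jack_latency_callback_mode_t mode, void *arg)
{
	struct impl *impl = arg;

	if (impl->sink == NULL || impl->source == NULL)
		return;		/* already destroyed, nothing to update */

	update_latency(impl, mode);
}
```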
Let the server calculate signal time when it starts the graph. Otherwise
we overwrite old values and we can't do stats.
We might be able to piggyback the signal time in the prev_signal_time
field later.
Don't make an extra eventfd for activating the remote-node; we can
use the server-side eventfd and send it to the remote side using
the transport.
The remote node already adds the eventfd to the data-loop, which avoids
doing the same on the server.
This makes driver nodes trigger all remote nodes directly instead of
going through an intermediate eventfd. For resuming nodes, we already
used the node eventfd directly, so this is only a small optimization
for the initial cycle start.
Add a latencyOffsetNsec prop to the combine node.
This is mainly useful for BAP device sets; the property appears in
the PulseAudio UI only when the node is associated with a device.
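As an assumed illustration (not the module code), such a property can be pushed to a node as a Props param; the node proxy and the offset value here are placeholders:

```c
#include <spa/param/props.h>
#include <spa/pod/builder.h>
#include <pipewire/pipewire.h>

/* Illustrative: push a latency offset (in nanoseconds) to a node proxy. */
static void set_latency_offset(struct pw_node *node, int64_t offset_nsec)
{
	uint8_t buffer[256];
	struct spa_pod_builder b = SPA_POD_BUILDER_INIT(buffer, sizeof(buffer));
	struct spa_pod *param;

	param = spa_pod_builder_add_object(&b,
			SPA_TYPE_OBJECT_Props, SPA_PARAM_Props,
			SPA_PROP_latencyOffsetNsec, SPA_POD_Long(offset_nsec));

	pw_node_set_param(node, SPA_PARAM_Props, 0, param);
}
```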
The trigger operation decrements the activation count on a node and
signals the eventfd when it reaches 0.
Implement pw_stream_trigger_process() with this new function.
Make the 3 types of trigger operations on a stream more explicit:
trigger: -> do node_trigger()
driver/driving: -> start the graph with the ready callback
other: -> emit the request trigger event.
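A simplified sketch of the decrement-and-signal idea, using a plain POSIX eventfd and an atomic counter; the real activation state lives in shared memory and uses PipeWire's own structures:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>

struct activation {
	atomic_int pending;	/* dependencies that still need to signal us */
	int eventfd;		/* wakes the node's data loop when ready */
};

/* Trigger: decrement the pending count and wake the node once it hits 0. */
static void node_trigger(struct activation *a)
{
	if (atomic_fetch_sub(&a->pending, 1) == 1) {
		uint64_t one = 1;
		write(a->eventfd, &one, sizeof(one));
	}
}
```

pw_stream_trigger_process() builds on the same operation to let a non-driving stream kick off its own processing.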
Don't call into the node process directly but use the eventfd to wake up
the node.
This is slightly slower and causes some change in behaviour
because we now need to go back to the poll loop and then let the node be
scheduled.
It is however nicer to have a uniform way to wake up nodes and it
opens up some new possibilities such as scheduling nodes in their own
threads on the server.
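Sketched generically with the public loop helpers: the node's eventfd is watched by the data loop and processing happens from the I/O callback instead of from a direct call. The node struct and process entry point here are hypothetical:

```c
#include <stdbool.h>
#include <unistd.h>
#include <pipewire/pipewire.h>
#include <spa/support/loop.h>

struct node;				/* hypothetical node state */
static void node_process(struct node *node);

/* Process the node from the loop callback when its eventfd fires. */
static void on_node_eventfd(void *data, int fd, uint32_t mask)
{
	struct node *node = data;
	uint64_t count;

	read(fd, &count, sizeof(count));	/* consume the wakeup */
	node_process(node);
}

static void attach_node(struct pw_loop *loop, struct node *node, int evfd)
{
	pw_loop_add_io(loop, evfd, SPA_IO_IN, false, on_node_eventfd, node);
}
```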
When flushing a capture stream we move all dequeued buffers to the
queued queue for recycling. The dequeued queue is filled again when
captured buffers become available. Otherwise we might never recycle
some dequeued buffers.
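A sketch of the flush behaviour with hypothetical queue names (the real stream keeps its dequeued/queued buffers in similar lists):

```c
#include <spa/utils/list.h>

struct buffer {
	struct spa_list link;
	/* ... */
};

/* On flush of a capture stream, hand every dequeued buffer back to the
 * queued list so it can be reused once new captured data arrives. */
static void flush_capture(struct spa_list *dequeued, struct spa_list *queued)
{
	struct buffer *b;

	spa_list_consume(b, dequeued, link) {
		spa_list_remove(&b->link);
		spa_list_append(queued, &b->link);
	}
}
```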
Don't go through the signal_func when we need to complete the graph
or when we need to process the node; go directly to process_node.
The signal_func is really only for nodes activating peers.
For client-nodes that use trigger, set the signal and wakeup time when
they start the server node. Also set finish time before we resume the
peers on the server.
Client-nodes should really resume the peers directly without going
through the server but this is something to improve later.
Currently, RAOP sinks referencing the same remote IP and port may be created
multiple times: one each for IPv4 and IPv6, times the number of network
interfaces used for mDNS discovery.
A recent change added `(IPv4)` and `(IPv6)` identifiers to the sinks' pretty
names; however, that is misleading, as often the service advertised through an
mDNSv6 record is actually an IPv4 service (i.e. the IP reference contained in
the IPv6 record may be an IPv4 address).
With this change, sink creation is skipped if a sink with the same advertised name already exists.
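In pseudocode-like C with hypothetical helpers, the dedup amounts to a name check before creating the sink:

```c
#include <pipewire/pipewire.h>

struct impl;	/* hypothetical module state */

static void *find_sink_by_name(struct impl *impl, const char *name);
static void create_raop_sink(struct impl *impl, const char *name);

/* Illustrative: skip creating a RAOP sink when one with the same
 * advertised name already exists. */
static void on_service_discovered(struct impl *impl, const char *name)
{
	if (find_sink_by_name(impl, name) != NULL) {
		pw_log_info("RAOP sink for '%s' already exists, skipping", name);
		return;
	}
	create_raop_sink(impl, name);
}
```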
Determine the application executable file so that the result can be trusted
and the file exists in the current namespace.
Don't use /proc/pid/cmdline, since that contains whatever was specified
by the exec() call.
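The gist, sketched with plain readlink(2); the actual helper in the code base may differ:

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Resolve a process's executable from /proc/<pid>/exe, which is maintained
 * by the kernel and points at a file visible in the current namespace,
 * unlike /proc/<pid>/cmdline which contains whatever argv exec() was given. */
static int get_executable(pid_t pid, char *buf, size_t size)
{
	char path[64];
	ssize_t len;

	snprintf(path, sizeof(path), "/proc/%d/exe", (int)pid);
	len = readlink(path, buf, size - 1);
	if (len < 0)
		return -1;
	buf[len] = '\0';
	return 0;
}
```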
Only stop processing the ready callback if we are a driver and not currently
driving the graph.
Streams that use trigger also emit the ready callback; they are not
driving the graph (but are also not a driver) and should therefore be
allowed to continue to resume_node to schedule the peer nodes.
See #3184
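Roughly, with hypothetical field names, the ready handler's early-out becomes:

```c
#include <stdbool.h>

struct node {
	bool driver;	/* node can drive a graph */
	bool driving;	/* node is currently driving the graph */
	/* ... */
};

static void resume_node(struct node *node);

/* Hypothetical ready handler: only a driver that is not currently driving
 * stops here; trigger streams (not drivers) fall through and resume peers. */
static void on_ready(struct node *node)
{
	if (node->driver && !node->driving)
		return;

	resume_node(node);
}
```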
Stop our own data-loop and enter/iterate/leave it from the JACK thread.
This runs all our nodes in the JACK thread and removes 2 context
switches (JACK to and from the PipeWire thread).
We can possibly do this nicer by only pushing our own streams onto a
new custom data-loop but that's for later.
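Schematically, using the public loop API (the real module code is more involved, and the impl struct here is a placeholder):

```c
#include <jack/jack.h>
#include <pipewire/pipewire.h>

struct impl {
	struct pw_data_loop *data_loop;	/* our data loop, thread no longer running */
	jack_client_t *client;
};

/* Illustrative: iterate the PipeWire data loop from the JACK process thread. */
static int jack_process(jack_nframes_t nframes, void *arg)
{
	struct impl *impl = arg;
	struct pw_loop *loop = pw_data_loop_get_loop(impl->data_loop);

	pw_loop_enter(loop);
	pw_loop_iterate(loop, 0);	/* run pending work without blocking */
	pw_loop_leave(loop);
	return 0;
}

static void start(struct impl *impl)
{
	pw_data_loop_stop(impl->data_loop);	/* stop our own thread */
	jack_set_process_callback(impl->client, jack_process, impl);
	jack_activate(impl->client);
}
```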
Make a node implementation and export it, just like we do for the
stream. This way we can use the node to implement set_active().
Tweak the draining logic to match pw_stream.
Add a new JACK sink/source pair that translates to a single JACK
client.
The JACK playback port appears as a PipeWire source and is processed
directly, synchronously, through the complete PipeWire graph into
the PipeWire sink, which is then made available on the JACK capture
ports.
Because all this happens in the same JACK cycle with no delay, the
latency is 0. A jack_iodelay on the JACK server has exactly the same
latency as a jack_iodelay on the PipeWire side.
The PipeWire sink and source are forced into the same rate and
buffer_size as the JACK server and can't dynamically change.
This only supports Audio for now.
When we have consumed all the buffer data, don't clear all the fds but only
those that were already consumed by the message. It is possible that we
already have fds for the next message and we don't want to discard
those.
Fixes some intermittent memory map errors.
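Conceptually, with hypothetical bookkeeping names:

```c
#include <string.h>

struct connection {		/* hypothetical fd bookkeeping */
	int fds[64];
	unsigned int n_fds;
};

/* Illustrative: after a message has been fully parsed, drop only the fds it
 * consumed; fds already received for the next message must stay queued. */
static void finish_message(struct connection *conn, unsigned int n_consumed)
{
	unsigned int remaining = conn->n_fds - n_consumed;

	memmove(conn->fds, conn->fds + n_consumed, remaining * sizeof(int));
	conn->n_fds = remaining;
}
```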
Calculate the stats at the start of the new cycle. The results will be
about the previous cycle, but this gives more accurate results because
we can also include the awake and finish times of remote nodes.
Make sure not to change the status of the activation in the ready event,
so that we don't overwrite the status of the last cycle yet.
This means we can always set AWAKE and the awake_time; the remote node
might update them when triggered, but that's ok.
After processing we can update the FINISHED state for non-remote nodes;
the remote nodes will update it after they complete the process
function.
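A rough sketch of the ordering, with simplified field names modeled on the activation record described above (the real structure and stats code differ):

```c
#include <stdint.h>

struct activation {		/* simplified stand-in for the real activation */
	uint64_t prev_signal_time;
	uint64_t signal_time;
	uint64_t awake_time;
	uint64_t finish_time;
};

static void update_stats(const struct activation *a);	/* hypothetical */

/* At the start of cycle N, the times recorded during cycle N-1 (including
 * those written by remote nodes) are still intact, so compute the previous
 * cycle's stats before any field is overwritten for the new cycle. */
static void start_cycle(struct activation *a, uint64_t now)
{
	update_stats(a);

	a->prev_signal_time = a->signal_time;
	a->signal_time = now;
}
```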