It is inherently racy, and we have a better way to ensure that
we won't autostart the service:
dbus_message_set_auto_start()
So use that.
This commit also adds a missing call to `dbus_pending_call_unref()`
and indirectly fixes a type mismatch (`dbus_bool_t` vs. `bool`)
that was present in `is_dbus_service_running()`.
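As an illustration, here is a minimal sketch of the pattern, not the actual PipeWire code; the `Ping` probe and the exact function shape are assumptions:

```c
#include <dbus/dbus.h>
#include <stdbool.h>

/* Sketch: probe a name with auto-start disabled so libdbus will never
 * implicitly launch the service, and unref the pending call when done. */
static bool is_dbus_service_running(DBusConnection *conn, const char *name)
{
    DBusMessage *m, *r;
    DBusPendingCall *pending = NULL;
    bool running = false;

    m = dbus_message_new_method_call(name, "/",
                    "org.freedesktop.DBus.Peer", "Ping");
    if (m == NULL)
        return false;
    dbus_message_set_auto_start(m, FALSE);   /* never autostart the service */

    if (dbus_connection_send_with_reply(conn, m, &pending, -1) && pending) {
        dbus_pending_call_block(pending);
        r = dbus_pending_call_steal_reply(pending);
        if (r != NULL) {
            /* an error reply means the name has no owner */
            running = dbus_message_get_type(r) != DBUS_MESSAGE_TYPE_ERROR;
            dbus_message_unref(r);
        }
        dbus_pending_call_unref(pending);    /* the previously missing unref */
    }
    dbus_message_unref(m);
    return running;
}
```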
The DBusError passed to `dbus_set_error_from_message()` must
be initialized, otherwise libdbus aborts:
dbus[129473]: arguments to dbus_set_error_from_message() were incorrect,
assertion "(error) == NULL || !dbus_error_is_set ((error))"
failed in file dbus-message.c line 4043.
This is normally a bug in some application using the D-Bus library.
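A minimal sketch of the required initialization, assuming `reply` is a `DBusMessage` received from a pending call:

```c
#include <dbus/dbus.h>
#include <stdbool.h>

/* The DBusError must be initialized before it is passed to
 * dbus_set_error_from_message(), and freed again if it was set. */
static bool reply_is_error(DBusMessage *reply)
{
    DBusError error;
    bool is_error;

    dbus_error_init(&error);
    is_error = dbus_set_error_from_message(&error, reply);
    if (is_error)
        dbus_error_free(&error);
    return is_error;
}
```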
It is possible that we destroyed the source/sink by the time we get a
latency update from JACK. Don't try to update the source/sink in that
case, or we will crash.
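A hedged sketch of such a guard, with an illustrative `impl` layout and a hypothetical `update_latency()` helper:

```c
#include <jack/jack.h>

struct sink;

/* hypothetical helper, stands in for the real update logic */
static void update_latency(struct sink *s, jack_latency_callback_mode_t mode);

struct impl {
    struct sink *sink;    /* set to NULL when the sink is destroyed */
};

static void on_latency(jack_latency_callback_mode_t mode, void *arg)
{
    struct impl *impl = arg;

    if (impl->sink == NULL)
        return;           /* sink already gone, updating would crash */
    update_latency(impl->sink, mode);
}
```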
Let the server calculate the signal time when it starts the graph.
Otherwise we overwrite old values and we can't do stats.
We might be able to piggyback the signal time in the prev_signal_time
field later.
Don't make an extra eventfd for activating the remote node; we can
use the server-side eventfd and send it to the remote side using
the transport.
The remote node already adds the eventfd to the data-loop, so we avoid
doing the same on the server.
This makes driver nodes trigger all remote nodes directly instead of
going through an intermediate eventfd. For resuming nodes, we already
used the node eventfd directly, so this is only a small optimization
for the initial cycle start.
Add a latencyOffsetNsec prop to the combine node.
This is mainly useful for BAP device sets; the property appears in the
PulseAudio UI only when the node is associated with a device.
Some functions need to wait for the reply of the server before they can
complete, but the JACK API does not allow us to emit notifications while
blocking in a function.
Delay emitting notifications when we are in such methods and signal an
eventfd to dispatch the queued notifications later.
Fixes #3183
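A minimal sketch of this deferral pattern, assuming an illustrative queue type (not the actual pw-jack code):

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

struct notify_queue {
    int fd;               /* from eventfd(0, EFD_CLOEXEC) */
    /* ... queued notification entries would live here ... */
};

static void queue_notify(struct notify_queue *q)
{
    uint64_t one = 1;
    /* enqueue the notification data, then wake the emitter */
    write(q->fd, &one, sizeof(one));
}

static void on_notify_event(struct notify_queue *q)
{
    uint64_t count;
    read(q->fd, &count, sizeof(count));
    /* now emit all queued notifications; we are no longer inside a
     * blocking JACK API call */
}
```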
The trigger operation decrements the activation count on a node and
signals the eventfd when it reaches 0.
Implement pw_stream_trigger_process() with this new function.
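A hedged sketch of the operation; the struct is illustrative, not PipeWire's actual activation layout:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>

struct activation {
    atomic_int pending;   /* peers that still have to trigger us */
    int eventfd;          /* wakes the node when it becomes ready */
};

static void node_trigger(struct activation *a)
{
    uint64_t one = 1;
    /* the peer that brings the count to 0 signals the eventfd */
    if (atomic_fetch_sub(&a->pending, 1) == 1)
        write(a->eventfd, &one, sizeof(one));
}
```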
Make the 3 types of trigger operations on a stream more explicit:

- trigger: do node_trigger()
- driver/driving: start the graph with the ready callback
- other: emit a request trigger event
Don't call into the node process function directly but use the eventfd
to wake up the node. This is slightly slower and causes some change in
behaviour, because we now need to go back to the poll loop and then let
the node be scheduled.
It is however nicer to have a uniform way to wake up nodes and it
opens up some new possibilities such as scheduling nodes in their own
threads on the server.
When flushing a capture stream, move all dequeued buffers to the
queued queue for recycling; the dequeued queue is filled again when
captured buffers become available. Otherwise we might never recycle
some dequeued buffers.
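An illustrative sketch of the flush step using `spa_list`; the real pw_stream queues are structured differently:

```c
#include <spa/utils/list.h>

struct queue {
    struct spa_list buffers;
};

static void flush_capture(struct queue *dequeued, struct queue *queued)
{
    /* hand every buffer the application still holds dequeued back to
     * the queued queue so it can be refilled with captured data */
    spa_list_insert_list(&queued->buffers, &dequeued->buffers);
    spa_list_init(&dequeued->buffers);
}
```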
Don't go through the signal_func when we need to complete the graph
or when we need to process the node; go directly to process_node.
The signal_func is really only for nodes activating peers.
For client-nodes that use trigger, set the signal and wakeup time when
they start the server node. Also set the finish time before we resume
the peers on the server.
Client-nodes should really resume the peers directly without going
through the server but this is something to improve later.
Currently, RAOP sinks referencing the same remote IP and port may be created multiple times:
one each for IPv4 and IPv6, times the number of network interfaces used for mDNS discovery.
A recent change added `(IPv4)` and `(IPv6)` identifiers to the sinks' pretty names; however, that
is misleading, as often the service advertised through an mDNSv6 record is actually an
IPv4 service (i.e. the IP reference contained in the IPv6 record may be an IPv4 address).
With this change, sink creation is skipped if a sink with the same advertised name already exists.
Determine the application executable file so that the result can be
trusted and the file exists in the current namespace.
Don't use /proc/pid/cmdline, since that contains whatever was specified
by the exec() call.
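A minimal sketch of the technique, assuming a hypothetical `get_exe_path()` helper:

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Resolve /proc/<pid>/exe, which the kernel maintains, instead of
 * trusting the exec()-controlled /proc/<pid>/cmdline. */
static ssize_t get_exe_path(pid_t pid, char *buf, size_t size)
{
    char proc[64];
    ssize_t len;

    snprintf(proc, sizeof(proc), "/proc/%d/exe", (int)pid);
    len = readlink(proc, buf, size - 1);
    if (len < 0)
        return -1;
    buf[len] = '\0';   /* readlink() does not NUL-terminate */
    return len;
}
```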
Only stop processing the ready callback if we are a driver and not currently
driving the graph.
Streams that use trigger will also emit the ready callback and are not
driving the graph (but are also not a driver), and should therefore be
allowed to continue to resume_node to schedule the peer nodes.
See #3184
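A minimal sketch of the corrected condition (flag names are illustrative):

```c
struct node_state { unsigned driver:1, driving:1; };

static int should_resume_peers(const struct node_state *n)
{
    /* only a driver that is not currently driving stops here; trigger
     * streams are neither driver nor driving, so they fall through */
    if (n->driver && !n->driving)
        return 0;
    return 1;   /* continue to resume_node() */
}
```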
The transport set-volume call may take a long time or never complete, so
make it async to avoid blocking the main loop.
Also reduce the log level to info for failed volume setting, as this is
something the user can do nothing about.
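A hedged sketch of the async pattern with libdbus pending calls; the method call itself is assumed to be prepared elsewhere:

```c
#include <dbus/dbus.h>

static void on_volume_reply(DBusPendingCall *pending, void *user_data)
{
    DBusMessage *r = dbus_pending_call_steal_reply(pending);

    if (r != NULL) {
        if (dbus_message_get_type(r) == DBUS_MESSAGE_TYPE_ERROR) {
            /* log at info level only: the user can't do anything */
        }
        dbus_message_unref(r);
    }
    dbus_pending_call_unref(pending);
}

static void set_volume_async(DBusConnection *conn, DBusMessage *m)
{
    DBusPendingCall *pending = NULL;

    /* send without blocking; the reply (or a timeout) arrives later */
    if (dbus_connection_send_with_reply(conn, m, &pending, -1) && pending)
        dbus_pending_call_set_notify(pending, on_volume_reply, NULL, NULL);
}
```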
Stop our own data-loop and enter/iterate/leave it from the JACK thread.
This runs all our nodes in the JACK thread and removes 2 context
switches (JACK to and from the PipeWire thread).
We could do this more nicely by only pushing our own streams onto a
new custom data-loop, but that's for later.
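A hedged sketch of driving the data-loop from the JACK process callback; the `impl` layout is an assumption:

```c
#include <jack/jack.h>
#include <pipewire/pipewire.h>

struct impl {
    struct pw_loop *data_loop;   /* our data-loop, thread stopped */
};

static int jack_process(jack_nframes_t nframes, void *arg)
{
    struct impl *impl = arg;
    (void)nframes;

    pw_loop_enter(impl->data_loop);
    pw_loop_iterate(impl->data_loop, 0);  /* run pending work, don't block */
    pw_loop_leave(impl->data_loop);
    return 0;
}
```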
Make a node implementation and export it, just like we do for the
stream. This way we can use the node to implement set_active().
Tweak the draining logic to match pw_stream.
When we're using the peaks resampler, allow resampling even when it is
disabled in the config.
The peaks resampler is just for GUIs and would not really change the
signal, so we can allow this.
Add a new JACK sink/source pair that translates to a single JACK
client.
The JACK playback port appears as a PipeWire source and is processed
directly, synchronously, through the complete PipeWire graph into
the PipeWire sink that is then made available on the JACK capture
ports.
Because all this happens in the same JACK cycle with no delay, the
latency is 0. A jack_iodelay on the JACK server has exactly the same
latency as the jack_iodelay on the PipeWire side.
The PipeWire sink and source are forced into the same rate and
buffer_size as the JACK server and can't change dynamically.
This only supports audio for now.
When we have consumed all the buffer data, don't clear all the fds but
only those that were already consumed by the current message. It is
possible that we already have fds for the next message and we don't
want to discard those.
Fixes some intermittent memory map errors.
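A hedged sketch of the fd bookkeeping (names illustrative): close and drop only what the current message used, keep the rest for the next message.

```c
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static void consume_fds(int *fds, uint32_t *n_fds, uint32_t n_used)
{
    uint32_t i;

    for (i = 0; i < n_used; i++)
        close(fds[i]);
    /* keep fds that belong to the next message in the buffer */
    memmove(fds, fds + n_used, (*n_fds - n_used) * sizeof(int));
    *n_fds -= n_used;
}
```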
Calculate the stats at the start of the new cycle. The results will be
for the previous cycle, but this is more accurate because we can also
include the awake and finish times of remote nodes.
Make sure not to change the status of the activation in the ready event
so that we don't overwrite the status of the last cycle yet.
This means we can always set AWAKE and the awake_time; the remote node
might update them when triggered but that's ok.
After processing we can update the FINISHED state for non-remote nodes,
the remote nodes will update it after they complete the process
function.
Handle the update of the activation status before calling resume_node()
because we can call this when starting a cycle or when completing
a node.
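A hedged sketch of that ordering; the types and names are illustrative, not PipeWire's activation structures:

```c
#include <stdint.h>
#include <time.h>

enum status { TRIGGERED, AWAKE, FINISHED };

struct activation {
    enum status status;
    uint64_t awake_time, finish_time;
};

static uint64_t now_nsec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

static void run_node(struct activation *a, int remote,
                     void (*process)(void *), void *data)
{
    /* AWAKE can always be set; a remote node may overwrite it later */
    a->status = AWAKE;
    a->awake_time = now_nsec();
    process(data);
    if (!remote) {
        /* remote nodes update FINISHED themselves after process */
        a->status = FINISHED;
        a->finish_time = now_nsec();
    }
}
```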
Only set the AWAKE status and time in process_node when not exported or
not driving. For an exported driving driver, the server will have
already updated the values before it triggered our last process and then
completed the graph. If we update again in the client, the server will
read wrong values.
Because there is not really a way yet to get the finish time of the remote
driver, the awake and finish times are too early. We might be able to fix
this later by making the stats at the start of the cycle from the
previous values.
Keep 2 extra variables to record the driver start and previous driver
start values. This way we can measure the period. This used to be done
with a little hack, using the finish_time of the driver, which was set
previously in resume_node().
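A minimal sketch of the bookkeeping with the two extra variables (names illustrative):

```c
#include <stdint.h>

struct driver_times {
    uint64_t driver_start;       /* start of the current cycle */
    uint64_t prev_driver_start;  /* start of the cycle before it */
};

static uint64_t begin_cycle(struct driver_times *t, uint64_t now)
{
    uint64_t period = now - t->driver_start;  /* previous cycle length */

    t->prev_driver_start = t->driver_start;
    t->driver_start = now;
    return period;
}
```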
For exported driving nodes, the TRIGGERED time is set in the remote-node
before it writes the eventfd to trigger the node_ready event. For
non-exported nodes, we need to set this ourselves.
For non-driver nodes, triggering node_ready means that they did an
async resume of the node. This means the node is finished and we can set
the finish_time accordingly.