We can remove most of the special async handling in adapter, filter and
stream because this is now handled in the core.
Add a node.data-loop property to assign the node to a named data-loop.
Assign the non-RT stream and filter to the main loop. This means that
the node fd will be added to the main loop and the node will be woken up
directly, without having to wake up the RT thread and invoke the process
callback in the main loop first. Because non-RT implies async, we can
handle this just like our RT processing: the output will only be used in
the next cycle.
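As a rough illustration, a non-RT filter could be pinned to a named loop
through its node properties when it is created. This is only a sketch; the
loop name "main-loop" and the filter name are assumptions.
```
#include <pipewire/pipewire.h>

/* Sketch only: request that this node is handled on a named loop via the
 * new node.data-loop property. The loop name used here is an assumption. */
static struct pw_filter *make_non_rt_filter(struct pw_core *core)
{
	struct pw_properties *props = pw_properties_new(
			"node.data-loop", "main-loop",
			NULL);
	return pw_filter_new(core, "my-non-rt-filter", props);
}
```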
When node.async is set, make the node async.
Advertise SPA_IO_AsyncBuffers on mixer ports when supported. Set a new
port flag when AsyncBuffers is supported on the port.
When making a link, if one of the nodes is async and the linked ports
support AsyncBuffers, make the link async and set this as a property on
the link. For async nodes we will use SPA_IO_AsyncBuffers on the mixer
ports.
Nodes that are async will not increment the peer required counters. This
ensures that the peer can start immediately, even before the async node
is ready.
On an async link, writers will write to the ((cycle + 1) & 1) async-buffers
entry and readers will read from (cycle & 1). This makes the readers read
from the previously filled area.
We need two very controlled areas with specific rules for who reads and
who writes where, because the two nodes will run concurrently and no
other synchronization is possible.
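A minimal sketch of that indexing rule; the struct and helper names are
placeholders, only the index arithmetic comes from the description above.
```
#include <stdint.h>

/* Two buffer slots shared between the async writer and its reader.
 * Type and field names are illustrative placeholders. */
struct async_slots {
	void *slot[2];
};

/* The writer fills the slot that belongs to the next cycle... */
static void *writer_slot(struct async_slots *s, uint64_t cycle)
{
	return s->slot[(cycle + 1) & 1];
}

/* ...while the reader consumes what was written in the previous cycle. */
static void *reader_slot(struct async_slots *s, uint64_t cycle)
{
	return s->slot[cycle & 1];
}
```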
These async nodes can be paused and blocked without blocking or xrunning
the rest of the graph. If the node didn't produce anything when the next
cycle starts, the graph will run with silence.
See #3509
This is to iterate params that are common to all ports, such as
EnumFormat or the supported IO areas. Mostly interesting for mixer and
splitter nodes so that we don't have to create a new port just to query
things.
spa_loop_invoke from the data loop to the main loop is not OK, as
WirePlumber currently runs its main loop with "pw_loop_enter();
pw_loop_iterate(); pw_loop_leave();", which causes the loop to be entered
only while it is processing an event.
In this case, part of the time the loop has impl->thread == 0, and calling
spa_loop_invoke() at such a time causes the callback to be run from the
current thread, i.e. in this case the data loop, which must not happen
here.
Fix this by using an eventfd instead, which is safe as the callback always
runs from the main loop.
An eventfd is also slightly more natural here, as multiple events will be
grouped into the same main-loop cycle.
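A rough sketch of the resulting pattern using a pw_loop event source (which
is backed by an eventfd); the callback and helper names are made up for
illustration.
```
#include <pipewire/pipewire.h>

/* Runs on the main loop when the event is signalled; multiple signals
 * from the data loop may be merged into a single wakeup. */
static void on_event(void *data, uint64_t count)
{
	/* handle the state change on the main loop */
}

/* Main-loop side: create the eventfd-backed event source once. */
static struct spa_source *add_event_source(struct pw_loop *main_loop, void *data)
{
	return pw_loop_add_event(main_loop, on_event, data);
}

/* Data-loop side: replaces the spa_loop_invoke() call. */
static void notify_main_loop(struct pw_loop *main_loop, struct spa_source *source)
{
	pw_loop_signal_event(main_loop, source);
}
```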
Use TX timestamps to get an accurate reading of the queue length and
latency on the kernel + controller side.
This is a new kernel BT feature, so it requires a kernel with the
necessary patches, currently available only in the bluetooth-next/master
branch. Enabling the experimental Poll Errqueue Bluetooth kernel feature
is also required.
Use the latency information to mitigate controller issues where ISO
streams become desynchronized, either due to TX problems or spontaneously
when some packets that should have been sent are left sitting in the
queue and transmission is off by a multiple of the ISO interval. This
state is visible in the latency information, so if we see that streams in
a group have persistently different latencies, drop packets to
resynchronize them.
Also make corrections if the kernel/controller queues get too long, so
that the latency there doesn't grow too big.
Since BlueZ watches the same socket for errors, and TX timestamps arrive
via the socket error queue, we need to set BT_POLL_ERRQUEUE in addition
to SO_TIMESTAMPING so that BlueZ doesn't think TX timestamps are errors.
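A very rough sketch of the socket setup this implies, assuming
BT_POLL_ERRQUEUE is available from the Bluetooth headers on a patched
kernel; the exact timestamping flags the plugin uses may differ.
```
#include <stdint.h>
#include <sys/socket.h>
#include <linux/net_tstamp.h>
#include <bluetooth/bluetooth.h>	/* BT_POLL_ERRQUEUE (patched kernel, assumption) */

static int enable_tx_timestamps(int fd)
{
	/* Illustrative flag set: software TX timestamps with IDs so completions
	 * can be matched back to the packets that were sent. */
	uint32_t flags = SOF_TIMESTAMPING_TX_SOFTWARE |
			SOF_TIMESTAMPING_SOFTWARE |
			SOF_TIMESTAMPING_OPT_ID |
			SOF_TIMESTAMPING_OPT_TSONLY;
	int on = 1;

	if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags)) < 0)
		return -1;
	/* Experimental option so that the TX timestamps queued on the error
	 * queue are not reported to BlueZ as socket errors. */
	if (setsockopt(fd, SOL_BLUETOOTH, BT_POLL_ERRQUEUE, &on, sizeof(on)) < 0)
		return -1;
	return 0;
}
```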
Link: https://github.com/bluez/bluez/issues/515
Link: https://lore.kernel.org/linux-bluetooth/cover.1710440392.git.pav@iki.fi/
Link: https://lore.kernel.org/linux-bluetooth/f57e065bb571d633f811610d273711c7047af335.1712499936.git.pav@iki.fi/
This patch fixes the use case where disable_tsched is set and
api.alsa.period-size is set to a value different from the default quantum
size. In such a configuration, the threshold needs to be set to its final
value before snd_pcm_sw_params_set_avail_min is called, to get IRQs with
the right timing.
The avail minimum is calculated from the threshold set in
check_position_config. This method returned different threshold values
right before playback started and after playback started, so the
threshold used in snd_pcm_sw_params_set_avail_min was incorrect.
Force check_position_config to use the configured values when called from
spa_alsa_prepare, as this method is called when starting a new playback
and state->period_frames and state->rate are already known.
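For context, a condensed sketch of the sw-params call involved; the wrapper
and variable names are placeholders, only the ALSA calls themselves are
real API.
```
#include <alsa/asoundlib.h>

/* With tsched disabled, avail_min decides when ALSA signals a wakeup, so
 * it must be computed from the final threshold value. */
static int apply_avail_min(snd_pcm_t *pcm, snd_pcm_uframes_t threshold)
{
	snd_pcm_sw_params_t *sw;
	int res;

	snd_pcm_sw_params_alloca(&sw);
	if ((res = snd_pcm_sw_params_current(pcm, sw)) < 0)
		return res;
	if ((res = snd_pcm_sw_params_set_avail_min(pcm, sw, threshold)) < 0)
		return res;
	return snd_pcm_sw_params(pcm, sw);
}
```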
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
Without this change, playback_ready or capture_ready was called
immediately after spa_alsa_start even though start-delay was set.
The ready function was called with an imprecise "nsec" value: "nsec" plus
latency should give the time when the next buffer will be played, which
wasn't true because start-delay was not included.
Now playback isn't started immediately when start-delay is set.
alsa_do_wakeup_work is still called immediately, but two things can
happen: either start-delay is smaller than max_error and the *_ready
function is called immediately, or start-delay is bigger than max_error
and state->next_time is updated to the correct value.
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
ALSA needs to call the handler soon enough to have headroom plus
threshold frames in the buffer, and not only threshold left.
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
Headroom is the number of extra samples available in the ALSA buffer on
top of the threshold. It is used to prefill the ALSA buffer with silence
before playback starts, and later it is used to calculate the target
number of frames in the ALSA buffer when get_status is called. The target
is calculated as headroom plus threshold, which should be smaller than
the buffer size to make sense.
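Purely illustrative, the relationship spelled out as code; the names and
the assertion are not taken from the implementation.
```
#include <assert.h>
#include <alsa/asoundlib.h>

/* Target fill level of the ALSA buffer when get_status runs; headroom and
 * threshold are in frames and their sum should stay below the buffer size. */
static snd_pcm_uframes_t target_frames(snd_pcm_uframes_t headroom,
		snd_pcm_uframes_t threshold, snd_pcm_uframes_t buffer_frames)
{
	assert(headroom + threshold < buffer_frames);
	return headroom + threshold;
}
```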
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
Signed-off-by: Carlos Rafael Giani <crg7475@mailbox.org>
A spa_node has callbacks on both the main and the data thread, which can
modify the internal state of vulkan_blit_state. Especially critical are
functions which might drop buffers currently in use. To mitigate this,
a read-write lock is used: the data thread shall try to acquire a read
lock before accessing the buffers, while the main thread has to acquire
exclusive access via a write lock before modifying the buffers.
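A condensed sketch of that scheme with a standard pthread rwlock; the
struct and function names are placeholders.
```
#include <pthread.h>

struct blit_state {
	pthread_rwlock_t lock;
	/* ... buffers and other state shared between the threads ... */
};

/* Data thread: only try the read lock so the RT path never blocks on the
 * main thread; skip the cycle if the buffers are being modified. */
static int process_buffers(struct blit_state *s)
{
	if (pthread_rwlock_tryrdlock(&s->lock) != 0)
		return 0;
	/* ... use the buffers ... */
	pthread_rwlock_unlock(&s->lock);
	return 1;
}

/* Main thread: take the write lock for exclusive access before dropping
 * or replacing buffers the data thread might still be using. */
static void clear_buffers(struct blit_state *s)
{
	pthread_rwlock_wrlock(&s->lock);
	/* ... drop/replace the buffers ... */
	pthread_rwlock_unlock(&s->lock);
}
```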
Add a SPA_PARAM_BUFFERS_metaType key to the Buffers object. This
contains a bitmask of the mandatory metadata types that should be
included on a buffer when using this Buffers param.
Make the buffer allocation logic skip over the Buffers params that
require unavailable metadata.
This can be used to, for example, enforce specific metadata to describe
extra buffer memory (such as the meaning of generic file descriptors).
One such use is explicit sync, where an extra buffer data is needed for
the sync fd, along with metadata that contains the sync_point.
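A hedged sketch of a Buffers param carrying the new key; the buffer counts
and sizes, and the choice of SPA_META_SyncTimeline as the mandatory
metadata, are illustrative assumptions.
```
#include <spa/pod/builder.h>
#include <spa/param/param.h>
#include <spa/param/buffers.h>
#include <spa/buffer/meta.h>

static struct spa_pod *build_buffers_param(struct spa_pod_builder *b)
{
	/* Require SyncTimeline metadata (used for explicit sync) on buffers
	 * negotiated with this param; all values are illustrative only. */
	return spa_pod_builder_add_object(b,
		SPA_TYPE_OBJECT_ParamBuffers, SPA_PARAM_Buffers,
		SPA_PARAM_BUFFERS_buffers,  SPA_POD_CHOICE_RANGE_Int(8, 2, 32),
		SPA_PARAM_BUFFERS_blocks,   SPA_POD_Int(1),
		SPA_PARAM_BUFFERS_size,     SPA_POD_Int(4096),
		SPA_PARAM_BUFFERS_stride,   SPA_POD_Int(4),
		SPA_PARAM_BUFFERS_metaType, SPA_POD_Int(1 << SPA_META_SyncTimeline));
}
```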
The BlueZ API as BAP Server gives us the ISO interval, instead of the
SDU interval, in the MediaTransport.QoS.Interval property. They are not
necessarily the same; what we need is the SDU interval.
The SDU interval is the interval between the packets the encoder outputs,
so it is determined by the codec configuration, and for LC3 it is equal
to the frame duration.
Add codec method get_interval() that returns the correct interval, and
use it in iso-io.
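For LC3 the method essentially returns the configured frame duration; this
is a hypothetical sketch, the struct, fields and signature are made up
here.
```
/* Hypothetical sketch: the SDU interval for LC3 equals the configured
 * frame duration (7500 or 10000 us), regardless of the ISO interval that
 * BlueZ reports in MediaTransport.QoS.Interval. */
struct lc3_codec_config {
	unsigned int frame_duration_us;	/* 7500 or 10000 */
};

static unsigned int lc3_get_interval(const struct lc3_codec_config *conf)
{
	return conf->frame_duration_us;
}
```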
Like the location, the orientation is a static property of libcamera
devices. While the rotation is already exposed as a buffer transform,
knowing the property can be handy for applications in various ways.
See also: cd8ac5c1a ("libcamera: add camera location property on nodes")
Quite a large number of UVC cameras have, due to firmware or kernel
driver issues, bad timestamps on the first frame, confusing clients like
pipewiresrc.
Drop the first frame, as this seems to be the most reliable workaround
for the time being.
Closes https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/3910
Add struct spa_error_location that holds information about a parsing
context, such as the line and column number, the error reason and the
line fragment containing the error.
Make spa_json_get_error() fill in the spa_error_location instead. Add
some error codes to the error state and use them to add a parsing reason
to the location.
Add a debug function to log the error location in a nice way. Also
add a FILE based debug context to log to any FILE.
Replace pw_properties_check_string() with
pw_properties_update_string_checked() and add
pw_properties_new_string_checked(). The check-string behaviour can still
be obtained by setting props to NULL, but the main purpose is to be able
to avoid parsing the JSON file twice in the future.
When using the old pw_properties_update_string(), log a warning when we
fail to parse the complete string.
Use the new checked functions and the debug functions to report about
parsing errors in the tools and conf parsing.
This gives errors like:
```
> pw-loopback --playback-props '{ foo = [ f : g ] }'
error: syntax error in --playback-props: Invalid array separator
line: 1 | { foo = [ f : g ] }
col: 14 | ^
```
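In application code, the checked variant might be used roughly like this;
the exact signature of pw_properties_new_string_checked() and the field
names of struct spa_error_location are assumptions here.
```
#include <stdio.h>
#include <string.h>
#include <pipewire/pipewire.h>

static struct pw_properties *parse_props(const char *str)
{
	struct spa_error_location loc;
	struct pw_properties *props;

	/* Assumed signature: string, length, location filled in on failure. */
	props = pw_properties_new_string_checked(str, strlen(str), &loc);
	if (props == NULL)
		/* Field names of spa_error_location are assumptions as well. */
		fprintf(stderr, "parse error at line %d col %d: %s\n",
				loc.line, loc.col, loc.reason);
	return props;
}
```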