spa_loop_invoke() from the data loop to the main loop is not OK, as
WirePlumber currently runs its main loop with "pw_loop_enter();
pw_loop_iterate(); pw_loop_leave();", which causes the loop to be
entered only while it is processing an event.
In that case, part of the time the loop has impl->thread==0, and calling
spa_loop_invoke() at such a time causes the callback to be run from the
current thread, i.e. here the data loop, which must not happen.
Fix this by using an eventfd instead, which is safe as the callback
always runs from the main loop.
An eventfd is also slightly more natural here, as multiple events
coalesce into the same main loop cycle.
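A minimal sketch of the resulting pattern, assuming a pw_loop-based
setup (struct and helper names hypothetical):
```
#include <errno.h>
#include <pipewire/pipewire.h>

struct impl {
	struct pw_loop *main_loop;
	struct spa_source *event;	/* eventfd-backed event source */
};

/* Always runs on the main loop; "count" is the accumulated eventfd
 * counter, so multiple signals from the data loop coalesce into a
 * single main loop wakeup. */
static void on_event(void *data, uint64_t count)
{
	struct impl *impl = data;
	/* ... process work handed over from the data loop ... */
}

/* main thread: set up the event source once */
static int impl_init_event(struct impl *impl)
{
	impl->event = pw_loop_add_event(impl->main_loop, on_event, impl);
	return impl->event != NULL ? 0 : -errno;
}

/* data loop: replaces the spa_loop_invoke() call */
static void impl_notify_main(struct impl *impl)
{
	pw_loop_signal_event(impl->main_loop, impl->event);
}
```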
Use TX timestamps to get an accurate reading of the queue length and
latency on the kernel + controller side.
This is a new kernel BT feature, so it requires a kernel with the
necessary patches, currently available only in the
bluetooth-next/master branch. Enabling the experimental "Poll Errqueue"
kernel Bluetooth feature is also required.
Use the latency information to mitigate controller issues where ISO
streams become desynchronized due to TX problems, or spontaneously when
some packets that should have been sent are left sitting in the queue
and transmission is off by a multiple of the ISO interval. This state is
visible in the latency information, so if we see that streams in a group
have persistently different latencies, drop packets to resynchronize
them.
Also make corrections if the kernel/controller queues get too long, so
that we don't accumulate too much latency there.
Since BlueZ watches the same socket for errors, and TX timestamps arrive
via the socket error queue, we need to set BT_POLL_ERRQUEUE in addition
to SO_TIMESTAMPING so that BlueZ doesn't think TX timestamps are errors.
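A hedged sketch of the socket setup; the exact SO_TIMESTAMPING flag set
is an assumption, and BT_POLL_ERRQUEUE comes from the patched kernel
headers (the fallback value below is likewise an assumption):
```
#include <errno.h>
#include <stdint.h>
#include <sys/socket.h>
#include <linux/net_tstamp.h>
#include <bluetooth/bluetooth.h>

#ifndef BT_POLL_ERRQUEUE
#define BT_POLL_ERRQUEUE 21	/* from the experimental patches; value is an assumption */
#endif

static int enable_tx_timestamping(int fd)
{
	uint32_t flags = SOF_TIMESTAMPING_SOFTWARE |
			SOF_TIMESTAMPING_TX_SOFTWARE |
			SOF_TIMESTAMPING_OPT_ID |
			SOF_TIMESTAMPING_OPT_TSONLY;
	int enable = 1;

	if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags)) < 0)
		return -errno;
	/* keep BlueZ from treating pending TX timestamps as socket errors */
	if (setsockopt(fd, SOL_BLUETOOTH, BT_POLL_ERRQUEUE, &enable, sizeof(enable)) < 0)
		return -errno;
	return 0;
}
```
The completed timestamps would then be read from the error queue with
recvmsg(fd, &msg, MSG_ERRQUEUE) and the SCM_TIMESTAMPING control
message.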
Link: https://github.com/bluez/bluez/issues/515
Link: https://lore.kernel.org/linux-bluetooth/cover.1710440392.git.pav@iki.fi/
Link: https://lore.kernel.org/linux-bluetooth/f57e065bb571d633f811610d273711c7047af335.1712499936.git.pav@iki.fi/
This patch fixes the use case where disable_tsched is set and
api.alsa.period-size is set to a value different from the default
quantum size. In such a configuration, the threshold needs to be set to
its final value before snd_pcm_sw_params_set_avail_min is called, to
get IRQs with the right timing.
The avail minimum is calculated from a threshold set in
check_position_config. That method returned a different threshold value
right before playback started than after playback had started, so the
threshold used in snd_pcm_sw_params_set_avail_min was incorrect.
Force check_position_config to use the configured values when called
from spa_alsa_prepare, as this method is called when starting new
playback and state->period_frames and state->rate are already known.
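For reference, a minimal hedged sketch of how the final threshold feeds
avail_min (not the actual pipewire code): with timer-based scheduling
disabled, the period IRQ wakes us once at least "threshold" frames of
space are available, so the threshold must be final before this call.
```
#include <alsa/asoundlib.h>

static int apply_avail_min(snd_pcm_t *pcm, snd_pcm_uframes_t threshold)
{
	snd_pcm_sw_params_t *sw;
	int res;

	snd_pcm_sw_params_alloca(&sw);
	if ((res = snd_pcm_sw_params_current(pcm, sw)) < 0)
		return res;
	if ((res = snd_pcm_sw_params_set_avail_min(pcm, sw, threshold)) < 0)
		return res;
	return snd_pcm_sw_params(pcm, sw);
}
```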
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
Without this change, playback_ready or capture_ready was called
immediately after spa_alsa_start even though start-delay was set.
The ready function was also called with an imprecise "nsec" value:
"nsec" plus latency should give the time when the next buffer will be
played, which wasn't true since start-delay was not included.
Now playback is started immediately when start_delay is set.
alsa_do_wakeup_work is still called immediately, but two things can
happen: either start-delay is smaller than max_error and the *_ready
function is called immediately, or start-delay is bigger than max_error
and state->next_time is updated to the correct value.
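A hedged sketch of that decision (names hypothetical, not the actual
code): it returns true when the *_ready callback may fire now, and
false when the wakeup should instead be rescheduled to next_time.
```
#include <stdbool.h>
#include <stdint.h>

static bool wakeup_is_ready(uint64_t nsec, uint64_t next_time,
			    uint64_t max_error)
{
	int64_t error = (int64_t)(next_time - nsec);
	/* with a start-delay, next_time lies far in the future, so the
	 * error exceeds max_error and the caller reschedules instead */
	return error < (int64_t)max_error;
}
```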
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
ALSA needs to call the handler soon enough to have headroom plus
threshold frames in the buffer, and not only threshold left.
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
Headroom is the number of extra samples available in the ALSA buffer on
top of the threshold. It is used to prefill the ALSA buffer with
silence before playback starts, and later to calculate the target
number of frames in the ALSA buffer when get_status is called. The
target is calculated as headroom plus threshold, which should be
smaller than the buffer size to make sense.
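As a hedged sketch of that relation (names hypothetical):
```
#include <stdint.h>

/* target fill level of the ALSA buffer: headroom + threshold,
 * which only makes sense below the total buffer size */
static uint32_t target_frames(uint32_t headroom, uint32_t threshold,
			      uint32_t buffer_frames)
{
	uint32_t target = headroom + threshold;
	return target < buffer_frames ? target : buffer_frames;
}
```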
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
Signed-off-by: Carlos Rafael Giani <crg7475@mailbox.org>
A spa_node has callbacks on both the main and the data thread, which
can modify the internal state of vulkan_blit_state. Especially critical
are functions which might drop buffers currently in use. To mitigate
this, a read-write lock is used. The data thread shall try to acquire a
read lock before accessing the buffers, while the main thread has to
acquire exclusive access via a write lock before modifying the buffers.
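A hedged sketch of the locking scheme (struct and function names
hypothetical):
```
#include <pthread.h>

struct blit_state {
	pthread_rwlock_t buffer_lock;
	/* ... buffers ... */
};

/* data thread: non-blocking, skip the cycle if the main thread is
 * currently reconfiguring the buffers */
static int process_buffers(struct blit_state *s)
{
	if (pthread_rwlock_tryrdlock(&s->buffer_lock) != 0)
		return -1;	/* busy: drop this cycle */
	/* ... blit using the buffers ... */
	pthread_rwlock_unlock(&s->buffer_lock);
	return 0;
}

/* main thread: exclusive access before dropping/replacing buffers */
static void clear_buffers(struct blit_state *s)
{
	pthread_rwlock_wrlock(&s->buffer_lock);
	/* ... release the vulkan buffers ... */
	pthread_rwlock_unlock(&s->buffer_lock);
}
```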
Add a SPA_PARAM_BUFFERS_metaType to the Buffers object. This contains a
bitmask of the mandatory metadata items that should be included on a
buffer when using this Buffers param.
Make the buffer allocation logic skip over the Buffers params that
require unavailable metadata.
This can be used, for example, to enforce specific metadata that
describes extra buffer memory (such as the meaning of generic file
descriptors). One such use is explicit sync, where an extra buffer data
block is needed for the sync fd, along with metadata that contains the
sync_point.
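A hedged sketch of what such a Buffers param could look like when built
with the pod builder (values illustrative; assumes the meta type enum
is the bit position in the mask):
```
#include <spa/pod/builder.h>
#include <spa/param/param.h>
#include <spa/param/buffers.h>
#include <spa/buffer/meta.h>

/* Buffers param requiring SyncTimeline metadata on every buffer */
static struct spa_pod *build_buffers_param(struct spa_pod_builder *b)
{
	return spa_pod_builder_add_object(b,
		SPA_TYPE_OBJECT_ParamBuffers, SPA_PARAM_Buffers,
		SPA_PARAM_BUFFERS_buffers,  SPA_POD_CHOICE_RANGE_Int(8, 2, 32),
		SPA_PARAM_BUFFERS_blocks,   SPA_POD_Int(1),
		SPA_PARAM_BUFFERS_size,     SPA_POD_Int(4096),
		SPA_PARAM_BUFFERS_stride,   SPA_POD_Int(4),
		SPA_PARAM_BUFFERS_metaType, SPA_POD_Int(1 << SPA_META_SyncTimeline));
}
```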
The BlueZ API, as a BAP Server, gives us the ISO interval instead of
the SDU interval in the MediaTransport.QoS.Interval property. They are
not necessarily the same; what we need is the SDU interval.
The SDU interval is the interval between packets the encoder outputs,
so it is determined by the codec configuration, and for LC3 it equals
the frame duration.
Add codec method get_interval() that returns the correct interval, and
use it in iso-io.
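For illustration, a hedged sketch (hypothetical helper, not the real
codec vtable entry): for LC3 the encoder outputs one SDU per frame, so
the SDU interval equals the configured frame duration even when the
controller's ISO interval differs from it.
```
#include <stdint.h>

static uint32_t lc3_get_interval_us(uint32_t frame_duration_us)
{
	return frame_duration_us;	/* 7500 or 10000 us for LC3 */
}
```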
Like the location, the orientation is a static property of libcamera
devices. While the rotation is already exposed as a buffer transform,
knowing the property itself can be handy for applications in various
ways.
See also: cd8ac5c1a ("libcamera: add camera location property on nodes")
Quite a large number of UVC cameras - due to firmware or kernel driver
issues - have bad timestamps on the first frame, confusing clients
like pipewiresrc.
Drop the first frame, as this seems to be the most reliable workaround
for the time being.
Closes https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/3910
Add struct spa_error_location, which holds information about some
parsing context, such as the line and column number, the error reason,
and the line fragment containing the error.
Make spa_json_get_error() fill in a spa_error_location instead. Add
some error codes to the error state and use them to attach a parse
reason to the location.
Add a debug function to log the error location in a nice way. Also
add a FILE based debug context to log to any FILE.
Replace pw_properties_check_string() with
pw_properties_update_string_checked() and add
pw_properties_new_string_checked(). The check-string behaviour can
still be had by setting props to NULL, but the main purpose is to make
it possible to avoid parsing the JSON file twice in the future.
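A hedged usage sketch of the new call (assuming the spa_error_location
fields line, col and reason; error handling trimmed):
```
#include <stdio.h>
#include <string.h>
#include <pipewire/pipewire.h>

/* parse a JSON properties string once, reporting where it failed */
static struct pw_properties *props_from_string(const char *str)
{
	struct spa_error_location loc;
	struct pw_properties *props;

	props = pw_properties_new_string_checked(str, strlen(str), &loc);
	if (props == NULL)
		fprintf(stderr, "parse error at %d:%d: %s\n",
				loc.line, loc.col, loc.reason);
	return props;
}
```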
When using the old pw_properties_update_string(), log a warning when we
fail to parse the complete string.
Use the new checked functions and the debug functions to report parsing
errors in the tools and conf parsing.
This gives errors like:
```
> pw-loopback --playback-props '{ foo = [ f : g ] }'
error: syntax error in --playback-props: Invalid array separator
line: 1 | { foo = [ f : g ] }
col: 14 | ^
```
Check for JSON parse errors, and log error messages as appropriate.
It's mostly enough to do this where the input is parsed for the first
time, e.g. via pw_properties_new_string, as that already validates the
JSON syntax.
Support reading from stdin and report syntax errors.
Also don't do an extra spa_json_enter when inserting a dummy "{" for
parsing files with top-level keys. In this case the tokenizer is
already "entered" after spa_json_init, and will give a parse error
since we don't insert the closing "}" for the dummy "{".
Check that each object key is associated with a value. Disallow object-
or array-valued keys.
Add a flag tracking whether the parser is at the global top level or
not, as there we may be either in object context or in a single-value
context.
Save the depth=0 array flag bit in the state, so that spa_json_next
preserves its complete state across calls. The higher-depth flag bits
can live on the temporary stack as they are not needed across calls.
Control characters are probably an error. We are also not validating
any UTF-8 here, so disallow bare UTF-8 too; one should likely use
strings for such content anyway, as spaces are not allowed otherwise.
A successful return is always >= 2 since it includes {} or [], so use 0
to indicate an error.
Since there's existing code that doesn't check the return value, it's
better to use 0 for errors, as that will likely just lead to producing
an empty string if the value is not checked.
Disallow = and : as bare items in [] containers, as that is likely
"[ { foo = bar } ]" mistyped as "[ foo = bar ]".
Disallow nesting errors, e.g. "[ foo bar" or "[ foo bar }".
Fix the handling of ", \ and # in bare strings.
Fix the ignoring of trailing comments.
Add a fixed-size stack (128 levels) to the tokenizer, so that it can
check these at levels below its depth.
When the tokenizer encounters an error, make it and its parents enter
an error state in which no further input is processed. This allows the
caller to check for parse errors later, as convenient.
The error state can be queried with spa_json_get_error(), which also
looks up the error's line/column position.
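A hedged usage sketch (error handling simplified): parse first, then
check the sticky error state afterwards.
```
#include <stdbool.h>
#include <stdio.h>
#include <spa/utils/json.h>

static bool check_json(const char *str, int len)
{
	struct spa_json it;
	struct spa_error_location loc;
	const char *value;

	spa_json_init(&it, str, len);
	while (spa_json_next(&it, &value) > 0)
		/* consume tokens; errors latch in the iterator */ ;

	if (spa_json_get_error(&it, str, &loc)) {
		fprintf(stderr, "error at line %d col %d: %s\n",
				loc.line, loc.col, loc.reason);
		return false;
	}
	return true;
}
```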
spa_json_parse_float/int receive a non-nul-terminated string, so
calling string functions that assume nul-termination is invalid.
Fix this by copying the data to a buffer before parsing.
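A hedged sketch of the pattern (not the exact implementation): copy the
unterminated input into a small buffer so strtof() can be used safely.
```
#include <stdlib.h>
#include <string.h>

static int parse_float_n(const char *str, int len, float *val)
{
	char buf[96], *end;

	if (len <= 0 || (size_t)len >= sizeof(buf))
		return 0;
	memcpy(buf, str, len);
	buf[len] = '\0';

	*val = strtof(buf, &end);
	return end == buf + len;	/* 1 on full parse, 0 on error */
}
```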