config.h needs to be consistently included before any standard headers
if we ever want to set feature test macros (like _GNU_SOURCE) inside it.
Without that, it can lead to hard-to-debug issues.
It can also be problematic for our own HAVE_* macros that it may define,
if it is not consistently made available before our own headers. Just
always include it first, before everything else.
We already did this in many files, just not consistently.
Including C headers inside of `extern "C"` breaks use from C++. Hoist
the includes of standard C headers above the block so we don't try
to mangle the stdlib.
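A minimal sketch of the resulting header layout, assuming a generic public header (the function name is illustrative, not from the tree):
```
/* standard C headers stay outside the extern "C" block */
#include <stdint.h>
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif

/* illustrative declaration, not a real API */
int example_api_call(uint32_t id, size_t len);

#ifdef __cplusplus
}
#endif
```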
I initially tried to scope this with a targeted change but it's too
hard to do correctly that way. This way, we avoid whack-a-mole.
Firefox works around this in commit
e21461b7b8b39cc31ba53c47d4f6f310c673ff2f.
Bug: https://bugzilla.mozilla.org/1953080
Support the RFC 4695 sysex segmentation rules where a sysex packet can
be split into multiple chunks using the f0 and f7 patterns like:
begin f0 ... f0
continue f7 ... f0
end f7 ... f7
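A hedged sketch, not the actual conversion code, of classifying a chunk by its first and last status bytes under these rules:
```
#include <stdint.h>
#include <stddef.h>

enum segment { SEGMENT_COMPLETE, SEGMENT_BEGIN, SEGMENT_CONTINUE, SEGMENT_END };

/* classify a sysex chunk by its first and last status bytes */
static enum segment classify_sysex_chunk(const uint8_t *data, size_t len)
{
        uint8_t first = data[0], last = data[len - 1];

        if (first == 0xf0 && last == 0xf0)
                return SEGMENT_BEGIN;      /* begin:    f0 ... f0 */
        if (first == 0xf7 && last == 0xf0)
                return SEGMENT_CONTINUE;   /* continue: f7 ... f0 */
        if (first == 0xf7 && last == 0xf7)
                return SEGMENT_END;        /* end:      f7 ... f7 */
        return SEGMENT_COMPLETE;           /* f0 ... f7, unsegmented */
}
```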
Add a unit test for the sysex UMP conversion.
It's not used anymore because it doesn't work so well.
The problem is that while it transparently proxies param enums on
ports to peers, it fails to emit events when those peer
params change in a way that would also change the enum result.
This makes it quite hard to use correctly.
Add macro SPA_CMP to do 3-way comparisons safely, and use it to avoid
signed integer overflows.
Also fix float/double comparisons (previously 0.1 compared equal to 0.8
because the difference was cast to the int return type).
Fix Id/Bool comparisons so they can return a negative value.
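A hedged illustration of the problem and of a 3-way comparison; the macro below is only a sketch and may differ from the real SPA_CMP definition:
```
/* illustrative 3-way comparison, may differ from the real SPA_CMP */
#define EXAMPLE_CMP(a, b)  ((a) > (b) ? 1 : ((a) < (b) ? -1 : 0))

/* subtraction can overflow signed int, which is undefined behaviour */
static int cmp_int_bad(int a, int b)        { return a - b; }
/* the difference truncates to 0 when cast to int: 0.1 vs 0.8 compare equal */
static int cmp_float_bad(float a, float b)  { return (int)(a - b); }

static int cmp_int_ok(int a, int b)         { return EXAMPLE_CMP(a, b); }
static int cmp_float_ok(float a, float b)   { return EXAMPLE_CMP(a, b); }
```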
Change the GenericFd data type to SyncObj. It's probably better to
explicitly state the data type than to make something generic. Otherwise
we would need to transfer the specific fd type somewhere else and there
is no room for that in the buffer, and the metadata is not a good idea
either because it can be modified and corrupted at runtime.
Add the SyncTimeline metadata. This contains 2 points on two timelines
(SyncObj datas in the buffer). The buffer can be accessed when the
acquire_point is signaled on the timeline; when the buffer can be
released, the release_point on the timeline should be signaled.
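A hedged sketch of what such a metadata layout could look like; the struct and field names are assumptions, not copied from the real header:
```
#include <stdint.h>

/* field names are assumptions, the real sync-timeline metadata may differ */
struct example_meta_sync_timeline {
        uint32_t flags;
        uint32_t padding;
        uint64_t acquire_point;   /* wait for this point before accessing the buffer */
        uint64_t release_point;   /* signal this point when the buffer can be released */
};
```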
Add struct spa_error_location that holds information about some parsing
context, such as the line and column number, the error reason and the
line fragment containing the error.
Make spa_json_get_error() fill in the spa_error_location instead. Add
some error codes to the error state and use this to add a parsing reason
to the location.
Add a debug function to log the error location in a nice way. Also
add a FILE based debug context to log to any FILE.
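A hedged usage sketch; the exact spa_json_get_error() signature and the spa_error_location field names are assumed here and should be checked against the real header:
```
#include <stdio.h>
#include <spa/utils/json.h>

/* assumed signature and field names, check the real header */
static void report_parse_error(struct spa_json *it, const char *str)
{
        struct spa_error_location loc;

        if (spa_json_get_error(it, str, &loc))
                fprintf(stderr, "parse error at %d:%d: %s\n",
                                loc.line, loc.col, loc.reason);
}
```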
Replace pw_properties_check_string() with
pw_properties_update_string_checked() and add
pw_properties_new_string_checked(). The check string behaviour can still
be done by setting props to NULL but the main purpose is to be able to
avoid parsing the json file twice in the future.
When using the old pw_properties_update_string(), log a warning when we
fail to parse the complete string.
Use the new checked functions and the debug functions to report about
parsing errors in the tools and conf parsing.
This gives errors like:
```
> pw-loopback --playback-props '{ foo = [ f : g ] }'
error: syntax error in --playback-props: Invalid array separator
line: 1 | { foo = [ f : g ] }
col: 14 | ^
```
Check that each object key is associated with a value. Disallow object-
or array-valued keys.
Add a flag tracking whether the parser is at the global top level or not,
as there we may be either in object context or in a single-value context.
Save the depth=0 array flag bit in the state, so that spa_json_next
preserves its complete state across calls. The higher-depth flag bits can
be kept on a temporary stack as they are not needed across calls.
Control characters are probably an error. We are also not validating any
UTF-8 here, so disallow bare UTF-8 too; such content should likely be put
in quoted strings anyway, since spaces are not allowed otherwise.
Disallow = and : as bare items in [] containers, as that is likely
"[ { foo = bar } ]" mistyped as "[ foo = bar ]".
Disallow nesting errors, e.g. "[ foo bar" or "[ foo bar }".
Fix handling of ", \ and # in bare strings.
Fix ignoring trailing comments.
Add a fixed-size stack (128 levels) to the tokenizer, so that it can
check these at levels below its depth.
When the tokenizer encounters an error, make it and its parents enter an
error state in which no further input will be processed. This allows the
caller to check for parse errors later, as convenient.
The error state can be queried using spa_json_get_error, which also
looks up the error line/column position.
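Some illustrative inputs (assumptions, not copied from the test suite) that the stricter rules above should now flag as errors:
```
/* illustrative inputs the stricter tokenizer should now reject */
static const char * const rejected[] = {
        "{ key }",          /* object key without a value */
        "{ [ a ] = b }",    /* array-valued object key    */
        "[ foo = bar ]",    /* bare '=' inside an array   */
        "[ foo bar }",      /* mismatched nesting         */
        "[ foo bar",        /* unterminated array         */
};
```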
spa_json_parse_float/int receive a non nul-terminated string, so calling
string functions that assume nul-termination is invalid.
Fix this by copying the data to a buffer before parsing.
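A minimal sketch of the fix idea; the helper name and buffer size are illustrative:
```
#include <stdlib.h>
#include <string.h>

static int example_parse_float(const char *val, size_t len, float *result)
{
        char buf[96], *end;

        if (len >= sizeof(buf))
                return 0;
        /* make a nul-terminated copy before using string functions */
        memcpy(buf, val, len);
        buf[len] = '\0';
        *result = strtof(buf, &end);
        return end > buf && *end == '\0';
}
```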
The tag param has a list of arbitrary key/value pairs. Like the Latency
param, it travels up and downstream. Mixers will append the info
dictionaries or do some more fancy merging.
The purpose is to transport arbitrary metadata, out-of-band, through the
graph; it's used for stream metadata and other stream properties.
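A hedged illustration of the payload shape only; the actual param is built as a POD object and these keys are made up:
```
#include <spa/utils/dict.h>

/* made-up keys, only to illustrate the shape of a tag info dict */
static const struct spa_dict_item tag_items[] = {
        { "media.title",  "Example Title"  },
        { "media.artist", "Example Artist" },
};
static const struct spa_dict tag_info = SPA_DICT_INIT_ARRAY(tag_items);
```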
`sd_journal_seek_tail()` is supposed to seek to the logical end of the journal,
i.e. (always) after the last entry. A call to `sd_journal_previous()` is needed
to seek to the last entry, so that `sd_journal_next()` can be called
successfully in `find_in_journal()`. Without it, the journal would always
stay at the end of the list of entries, so further `sd_journal_next()`
calls would fail as there are no entries after the last.
See:
* https://github.com/systemd/systemd/issues/25369
* https://github.com/systemd/systemd/pull/26577
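A minimal sketch of the seek pattern described above (error handling omitted):
```
#include <systemd/sd-journal.h>

static void seek_to_last_entry(sd_journal *journal)
{
        /* seek to the logical end, i.e. after the last entry ... */
        sd_journal_seek_tail(journal);
        /* ... then step back one entry so that sd_journal_next() can
         * move forward over entries appended afterwards */
        sd_journal_previous(journal);
}
```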
Add a _fast callback function that skips the version and method check.
We can use this in performance-critical places where we do the check
outside of the critical loops.
Make all system methods _fast calls. We expect them to exist and have
the right version. If we add new versions we can make those calls
checked (slow) again.
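A hedged sketch of the difference; the struct layout and names below are illustrative, not the real SPA system interface:
```
#include <errno.h>
#include <stdint.h>
#include <stddef.h>

struct example_methods {
        uint32_t version;
        int (*read)(void *object, int fd, uint64_t *count);
};

struct example_iface {
        const struct example_methods *methods;
        void *object;
};

/* checked variant: verify the version and that the method exists */
static inline int example_read(struct example_iface *i, int fd, uint64_t *count)
{
        if (i->methods->version < 1 || i->methods->read == NULL)
                return -ENOTSUP;
        return i->methods->read(i->object, fd, count);
}

/* _fast variant: assume the method exists with the right version */
static inline int example_read_fast(struct example_iface *i, int fd, uint64_t *count)
{
        return i->methods->read(i->object, fd, count);
}
```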
Add support for PIPEWIRE_DEBUG=3,foo.bar:5 to set a global log level in
addition to specific topics.
Previously this would have to be done with *:3,foo.bar:5, which would not
really set a global level but instead set all topics to the custom level 3.
This metadata can be used to signal that a buffer is transformed.
The values are intentionally chosen to coincide with
wl_output::transform from the Wayland window system.
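A hedged sketch of the value mapping; the enum name and member names are illustrative, following wl_output::transform:
```
/* values follow wl_output::transform; names here are illustrative */
enum example_transform {
        EXAMPLE_TRANSFORM_NORMAL = 0,
        EXAMPLE_TRANSFORM_90,          /* rotated 90 degrees counter-clockwise */
        EXAMPLE_TRANSFORM_180,
        EXAMPLE_TRANSFORM_270,
        EXAMPLE_TRANSFORM_FLIPPED,     /* flipped around a vertical axis */
        EXAMPLE_TRANSFORM_FLIPPED_90,
        EXAMPLE_TRANSFORM_FLIPPED_180,
        EXAMPLE_TRANSFORM_FLIPPED_270,
};
```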
```
uint32_t i;
for (i = 0; i < SPA_N_ELEMENTS(some_array); i++)
        .. stuff with some_array[i].foo ...
```
becomes:
```
SPA_FOR_EACH_ELEMENT_VAR(some_array, p)
        .. stuff with p->foo ..
```
When we are already past the size of the buffer, don't bother calling
the overflow callback anymore; the buffer is already corrupted.
Otherwise it would be possible for the overflow callback to fail the
first time around, so some data is skipped, and then for the next
overflow callback to succeed, giving the impression that all is
fine.
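A hedged sketch of the guard being described; the builder layout here is illustrative, not the real spa_pod_builder:
```
#include <errno.h>
#include <stdint.h>

/* illustrative builder, not the real spa_pod_builder */
struct example_builder {
        uint32_t size;                              /* total buffer size    */
        uint32_t offset;                            /* current write offset */
        int (*overflow)(void *data, uint32_t size); /* asks to grow buffer  */
        void *user_data;
};

/* decide whether writing `need` more bytes fits, calling the overflow
 * callback only while the buffer is not already corrupted */
static int example_check_space(struct example_builder *b, uint32_t need)
{
        if (b->offset > b->size)
                return -ENOSPC;   /* already past the end: skip the callback */
        if (b->offset + need > b->size)
                return b->overflow ? b->overflow(b->user_data, b->offset + need)
                                   : -ENOSPC;
        return 0;
}
```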
Add a unit test for this.
A lot of code calls spa_hook_remove() from error paths where the hook,
and therefore the list, may not have been initialized.
This leads to null-dereferences.
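A hedged sketch of a defensive pattern for this; the real fix may instead make sure hooks are always initialized before any error path:
```
#include <spa/utils/hook.h>

/* assumption: a hook that was never added is still zero-initialized */
static void example_hook_remove_safe(struct spa_hook *hook)
{
        if (hook->link.next != NULL)
                spa_hook_remove(hook);
}
```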
Register a pthread cleanup handler to guarantee
that `spa_source::{priv, rmask}` are cleared even
if the thread is cancelled while the loop is dispatching.
This is necessary; otherwise `spa_source::priv` could point
to the stack of the cancelled thread, which leads to
problems like this later:
```
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f846b025be2 in detach_source (source=0x7f845f435f60) at ../spa/plugins/support/loop.c:144
144             e->data = NULL;
[Current thread is 1 (LWP 5274)]
(gdb) p e
$1 = (struct spa_poll_event *) 0x7f845e297820
(gdb) bt
#0  0x00007f846b025be2 in detach_source (source=0x7f845f435f60) at ../spa/plugins/support/loop.c:144
#1  0x00007f846b0276ad in free_source (s=0x7f845f435f60) at ../spa/plugins/support/loop.c:359
#2  0x00007f846b02a453 in loop_destroy_source (object=0x7f845f3af478, source=0x7f845f435f60) at ../spa/plugins/support/loop.c:786
#3  0x00007f846b02a886 in impl_clear (handle=0x7f845f3af478) at ../spa/plugins/support/loop.c:859
#4  0x00007f846b172f40 in unref_handle (handle=0x7f845f3af450) at ../src/pipewire/pipewire.c:211
#5  0x00007f846b173579 in pw_unload_spa_handle (handle=0x7f845f3af478) at ../src/pipewire/pipewire.c:346
#6  0x00007f846b15a761 in pw_loop_destroy (loop=0x7f845f434e30) at ../src/pipewire/loop.c:159
#7  0x00007f846b135d8e in pw_data_loop_destroy (loop=0x7f845f434cb0) at ../src/pipewire/data-loop.c:166
#8  0x00007f846b12c31c in pw_context_destroy (context=0x7f845f41c690) at ../src/pipewire/context.c:485
#9  0x00007f846b3ddf9e in jack_client_close (client=0x7f845f3c1030) at ../pipewire-jack/src/pipewire-jack.c:3481
...
```
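A hedged sketch of the cleanup-handler pattern described above; the names and bookkeeping are illustrative, not the actual loop code:
```
#include <pthread.h>
#include <stddef.h>

/* illustrative per-iteration bookkeeping that points into this stack */
struct dispatch_state {
        void **priv_slots;
        int n_slots;
};

/* runs on normal exit and on thread cancellation */
static void clear_dispatch_state(void *data)
{
        struct dispatch_state *state = data;
        for (int i = 0; i < state->n_slots; i++)
                state->priv_slots[i] = NULL;   /* drop pointers into this stack */
}

static void dispatch_once(struct dispatch_state *state)
{
        pthread_cleanup_push(clear_dispatch_state, state);
        /* ... dispatch the ready sources, a cancellation point may be hit ... */
        pthread_cleanup_pop(1);   /* also run the handler on the normal path */
}
```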
We need exactly 4 hex characters, everything else is refused. We
also copy those characters directly to the output string without
assuming any encoding.
See #2337
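A hedged sketch of the "exactly 4 hex characters" rule only; the helper name is illustrative:
```
#include <stdint.h>

/* parse exactly 4 hex characters, refuse anything else */
static int example_parse_hex4(const char *p, uint16_t *out)
{
        uint16_t v = 0;

        for (int i = 0; i < 4; i++) {
                char c = p[i];
                if (c >= '0' && c <= '9')
                        v = (uint16_t)((v << 4) | (c - '0'));
                else if (c >= 'a' && c <= 'f')
                        v = (uint16_t)((v << 4) | (c - 'a' + 10));
                else if (c >= 'A' && c <= 'F')
                        v = (uint16_t)((v << 4) | (c - 'A' + 10));
                else
                        return -1;
        }
        *out = v;
        return 0;
}
```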