Previously the pointer was determined as follows:
mm->this.ptr = SPA_PTROFF(m->ptr, range.start, void);
however, when `pw_map_range` is calculated, `pw_map_range::start` is the offset
from the beginning of the first page, which starts at `pw_map_range::offset`.
This works correctly if `memblock_map()` runs because that will map the file
at the expected offset, so using `range.start` is correct.
However, when a mapping is reused (i.e. `memblock_find_mapping()` finds something),
then `range.start` is not necessarily correct. Consider the following example:
* page size is 10
* one memblock with size 20 (2 pages)
* the application wants two mappings:
* (offset=5,size=10)
* (offset=15,size=5)
After the first request from the application, a `mapping` object is created
that covers the first two pages of the memblock: offset=0 and size=20. During
the second request, the calculated `pw_map_range` is as follows:
{ start = 5, offset = 10, size = 10 }
and the only previously created mapping is reused since (0 <= 5) and (10 <= 20). When
the pointer of the mapping is adjusted afterwards it will be incorrect since `m->ptr`
points to byte 0 of page 0 (instead of byte 0 of page 1, which is what the calculation
assumes). Therefore the two mappings will unexpectedly overlap.
Fix that by using `offset - m->offset` when adjusting the mapping's pointer. Also move
the `range` variable into a smaller scope because it only makes sense there. And add
a test that checks the previously incorrect case described above.
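With the fix, the mapping pointer is derived from the requested offset relative to
the start of the reused mapping, roughly:
mm->this.ptr = SPA_PTROFF(m->ptr, offset - m->offset, void);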
Fixes: 2caf81c97c ("mem: improve memory handling")
Fixes #4884
Improve the spa_ump_to_midi function so that it can consume multiple UMP
messages and produce multiple midi messages.
Some UMP messages (like program changes) need to be translated into up
to 3 midi messages. Do this by adding state to the function and by
making it consume the input bytes, just like the spa_ump_from_midi
function.
Adapt code to this new world. This is a small API break.
Check if the "TracerPid" field in `/proc/self/status` is non-zero. This does
not need libcap, and does not depend on capabilities, and it also works under
qemu user emulation.
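As an illustration, a check along these lines (hypothetical helper name) is enough:
```
#include <stdio.h>
#include <stdbool.h>

static bool being_traced(void)
{
        char line[256];
        bool traced = false;
        FILE *f = fopen("/proc/self/status", "r");

        if (f == NULL)
                return false;
        while (fgets(line, sizeof(line), f) != NULL) {
                unsigned long pid;
                /* the field is 0 when no tracer is attached */
                if (sscanf(line, "TracerPid: %lu", &pid) == 1) {
                        traced = pid != 0;
                        break;
                }
        }
        fclose(f);
        return traced;
}
```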
Set LD_LIBRARY_PATH to have processes spawned by the test load the right
library.
Mark the openal-info test as skipped if ASAN is enabled. The problem is that
openal-info would need LD_PRELOAD to work properly in that case, so just
disable the test.
Using the parser for the spa_pod_sequence in the data buffers is
required in order to safely read the pods while there could be
concurrent writes.
See #4822
Use get_string() to get a pointer to the string in the pod so that we
also check if it has a 0 terminator.
Fix the test case now that is_string returns true even for non
zero-terminated strings.
One issue is that the `systemd-{system,user}-service` feature options
do nothing without the `systemd` option. This makes it more
complicated to arrive at the desired build configuration: each of the three
options can be set in 3 ways, giving 3^3 = 27 possible combinations, but if
`systemd=disabled`, then the other two are just ignored.
Secondly, the `systemd` option also influences whether or not libsystemd
will be used. This is not strictly necessary, since the "systemd"
and "libsystemd" pkg-config files might be split, and one might wish to
disable any kind of service file generation but still use libsystemd.
Solve the first issue by using the `systemd-{system,user}-service` options
when looking up the "systemd" dependency for generating service files. This
puts the corresponding option in full control, so no secondary options
are necessary. The "systemd" dependency is then potentially looked up
twice, but that should not be a significant issue since meson caches dependency
lookups.
And solve the second issue by renaming the now unused `systemd` option to
`libsystemd` and using it solely to control whether or not libsystemd will
be used.
Furthermore, the default value of `systemd-user-service` is set to "auto" to
prevent the dependency lookup from failing on non-systemd systems out of
the box. And the journal tests in "test-support" are extended to return "skip"
if `sd_journal_open()` returns `ENOSYS`, which is needed because "elogind"
ships the systemd pkg-config files and headers.
Add a SPA_POD_ALIGN = 8 constant and make sure all pods are aligned to it.
Use the new constant to pad and to check alignment. Add some new macros to
check the pod type, alignment and minimal size.
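A minimal sketch of what such checks can look like; SPA_POD_ALIGN comes from the
text above, the check macro names below are illustrative only:
```
#include <stdint.h>
#include <stddef.h>

#define SPA_POD_ALIGN   8

/* a pod must be 8-byte aligned ... */
#define POD_IS_ALIGNED(p)       (((uintptr_t)(p) & (SPA_POD_ALIGN - 1)) == 0)
/* ... and large enough to hold at least the size and type fields */
#define POD_SIZE_IS_VALID(sz)   ((sz) >= 2 * sizeof(uint32_t))
```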
config.h needs to be consistently included before any standard headers
if we ever want to set feature test macros (like _GNU_SOURCE or whatever)
inside it. Without that, it can lead to hard-to-debug issues.
It can also be problematic for our own HAVE_* macros that it may define
if it's not consistently made available before our own headers. Just
always include it first, before everything else.
We already did this in many files, just not consistently.
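For illustration, the intended ordering in every translation unit is:
```
#include "config.h"     /* first, so feature test macros affect the headers below */

#include <stdio.h>
#include <string.h>
```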
Including C headers inside of `extern "C"` breaks use from C++. Hoist
the includes of standard C headers above the block so we don't try
to mangle the stdlib.
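A minimal sketch of the resulting pattern (the declaration shown is a placeholder):
```
#include <stdint.h>     /* standard headers stay outside the block */
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif

/* only our own declarations live inside the block */
void spa_example_function(uint32_t id);

#ifdef __cplusplus
}
#endif
```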
I initially tried to scope this with a targeted change but it's too
hard to do correctly that way. This way, we avoid whack-a-mole.
Firefox is working around this in their e21461b7b8b39cc31ba53c47d4f6f310c673ff2f
commit.
Bug: https://bugzilla.mozilla.org/1953080
Support the RFC 4695 sysex segmentation rules where a sysex packet can
be split into multiple chunks using the f0 and f7 patterns like:
begin f0 ... f0
continue f7 ... f0
end f7 ... f7
Add a unit test for the sysex UMP conversion.
It's not used anymore because it does not work so well.
The problem is that while it transparently proxies param enums on
ports to peers, it fails to emit events when those peer
params change in a way that would make the enum result change as well.
This makes it quite hard to use correctly.
Add macro SPA_CMP to do 3-way comparisons safely, and use it to avoid
signed integer overflows.
Also fix float/double comparisons (previously 0.1 compared equal to 0.8
because the difference was cast to the int return type).
Fix Id/Bool comparisons so they can return a negative value.
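A minimal sketch of the idea behind SPA_CMP (the in-tree definition may differ slightly):
```
/* comparing instead of subtracting avoids signed integer overflow */
#define SPA_CMP(a, b)   ((a) > (b) ? 1 : ((a) < (b) ? -1 : 0))
```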
Change the GenericFd data type to SyncObj. It's probably better to
explicitly state the data type than to make something generic. Otherwise
we would need to transfer the specific fd type somewhere else: there
is no room for that in the buffer, and the metadata is not a good idea
either because it can be modified and corrupted at runtime.
Add the SyncTimeline metadata. This contains 2 points on two timelines
(SyncObj datas in the buffer). The buffer can be accessed when the
acquire_point is signaled on the timeline, and when the buffer
can be released, the release_point on the timeline should be signaled.
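A sketch of the metadata described above; the exact field layout is an assumption
based on this description:
```
#include <stdint.h>

struct spa_meta_sync_timeline {
        uint32_t flags;
        uint32_t padding;
        uint64_t acquire_point;   /* wait for this point before accessing the buffer */
        uint64_t release_point;   /* signal this point when the buffer can be released */
};
```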
Add struct spa_error_location that holds information about some parsing
context, such as the line and column number, the error reason, and the line
fragment containing the error.
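A sketch of the information it carries; the field names here are an assumption
based on the description above:
```
#include <stddef.h>

struct spa_error_location {
        int line;               /* line number of the error */
        int col;                /* column number of the error */
        size_t len;             /* length of the line fragment */
        const char *location;   /* line fragment containing the error */
        const char *reason;     /* parsing reason for the error */
};
```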
Make spa_json_get_error() fill in the spa_error_location instead. Add
some error codes to the error state and use this to add a parsing reason
to the location.
Add a debug function to log the error location in a nice way. Also
add a FILE based debug context to log to any FILE.
Replace pw_properties_check_string() with
pw_properties_update_string_checked() and add
pw_properties_new_string_checked(). The check string behaviour can still
be done by setting props to NULL but the main purpose is to be able to
avoid parsing the json file twice in the future.
When using the old pw_properties_update_string(), log a warning when we
fail to parse the complete string.
Use the new checked functions and the debug functions to report about
parsing errors in the tools and conf parsing.
This gives errors like:
```
> pw-loopback --playback-props '{ foo = [ f : g ] }'
error: syntax error in --playback-props: Invalid array separator
line: 1 | { foo = [ f : g ] }
col: 14 | ^
```
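A minimal usage sketch; the wrapper name and the exact signature of the checked
function are assumptions based on the description above:
```
#include <stdio.h>
#include <string.h>
#include <pipewire/pipewire.h>

static struct pw_properties *props_from_json(const char *str)
{
        struct spa_error_location loc;
        struct pw_properties *props;

        props = pw_properties_new_string_checked(str, strlen(str), &loc);
        if (props == NULL)
                fprintf(stderr, "parse error at %d:%d: %s\n",
                        loc.line, loc.col, loc.reason);
        return props;
}
```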
Check that each object key is associated with a value. Disallow object- or
array-valued keys.
Add a flag tracking whether the parser is at the global top level or not, as
there we may be either in an object context or in a single-value context.
Save the depth=0 array flag bit in the state, so that spa_json_next preserves
its complete state across calls. The higher-depth flag bits can live on the
temporary stack as they are not needed across calls.
Control characters are probably an error. We are also not validating any
utf8 here, so disallow bare utf8 too; one should likely use strings
for such content anyway, as spaces are not allowed otherwise.
Disallow = and : as bare items in [] containers, as that likely is
"[ { foo = bar } ]" mistyped as "[ foo = bar ]".
Disallow nesting errors, e.g. "[ foo bar" or "[ foo bar }".
Fix handling of ", \ and # in bare strings.
Fix ignoring trailing comments.
Add a fixed-size stack (128 levels) to the tokenizer, so that it can
check these at levels below its depth.
When the tokenizer encounters an error, make it and its parents enter an
error state where no further input will be processed. This allows the caller
to check for parse errors later, as convenient.
The error state can be queried using spa_json_get_error, which also
looks up the error line/column position.
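A minimal sketch of checking for parse errors after iterating; the helper name
and the exact spa_json_get_error() signature are assumptions:
```
#include <stdio.h>
#include <string.h>
#include <spa/utils/json.h>

static void check_json(const char *str)
{
        struct spa_json it;
        struct spa_error_location loc;
        const char *value;

        spa_json_init(&it, str, strlen(str));
        while (spa_json_next(&it, &value) > 0)
                /* consume top-level tokens */;

        if (spa_json_get_error(&it, str, &loc))
                fprintf(stderr, "error at line %d, col %d: %s\n",
                        loc.line, loc.col, loc.reason);
}
```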
spa_json_parse_float/int receive a non nul-terminated string, so calling
string functions that assume nul-termination is invalid.
Fix this by copying the data to a buffer before parsing.
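A minimal sketch of the approach (hypothetical helper name):
```
#include <stdlib.h>
#include <string.h>

static int parse_float_bounded(const char *value, int len, float *result)
{
        char buf[64], *end;

        if (len <= 0 || (size_t)len >= sizeof(buf))
                return 0;
        /* copy the non nul-terminated input into a bounded buffer first */
        memcpy(buf, value, len);
        buf[len] = '\0';

        *result = strtof(buf, &end);
        return end > buf && *end == '\0';
}
```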
The tag param has a list of arbitrary key/value pairs. Like the Latency
param, it travels upstream and downstream. Mixers will append the info
dictionaries or do some more fancy merging.
The purpose is to transport arbitrary metadata, out-of-band, through the
graph; it's used for stream metadata and other stream properties.
`sd_journal_seek_tail()` is supposed to seek to the logical end of the journal,
i.e. (always) after the last entry. A call to `sd_journal_previous()` is needed
to seek to the last entry, so that `sd_journal_next()` can be called
successfully in `find_in_journal()`. Without it, the journal would always
stay at the end of the list of entries, so further `sd_journal_next()`
calls would fail as there are no entries after the last.
See:
* https://github.com/systemd/systemd/issues/25369
* https://github.com/systemd/systemd/pull/26577
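A minimal sketch of the required call sequence (error handling omitted, helper
name is a placeholder):
```
#include <systemd/sd-journal.h>

static void read_new_entries(void)
{
        sd_journal *j;

        sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY);
        sd_journal_seek_tail(j);        /* positions after the last entry */
        sd_journal_previous(j);         /* step back onto the last entry */
        while (sd_journal_next(j) > 0) {
                /* entries appended after the seek are now reachable here */
        }
        sd_journal_close(j);
}
```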
Add a _fast callback function that skips the version and method check.
We can use this in performance-critical places where we do the
check outside of the critical loops.
Make all system methods _fast calls. We expect them to exist and have
the right version. If we add new versions we can make them slow.
Add support for PIPEWIRE_DEBUG=3,foo.bar:5 to set a global log level in
addition to specific topics.
Previously this would have to be done with *:3,foo.bar:5, which would not
really set a global level but instead set all topics to the custom level of 3.
This metadata can be used to signal that a buffer is transformed.
The values are intentionally chosen to coincide with
wl_output::transform from the Wayland window system.
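A sketch of how a consumer might read it, assuming the struct and enum names
shown here:
```
#include <spa/buffer/buffer.h>
#include <spa/buffer/meta.h>

static uint32_t buffer_transform(struct spa_buffer *buf)
{
        struct spa_meta_videotransform *vt;

        vt = spa_buffer_find_meta_data(buf, SPA_META_VideoTransform, sizeof(*vt));
        /* the value coincides with wl_output::transform; 0 means no transform */
        return vt != NULL ? vt->transform : 0;
}
```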
```
uint32_t i;
for (i = 0; i < SPA_N_ELEMENTS(some_array); i++)
        .. stuff with some_array[i].foo ...
```
becomes:
```
SPA_FOR_EACH_ELEMENT_VAR(some_array, p)
        .. stuff with p->foo ..
```