Previously the pointer was determined as follows:
mm->this.ptr = SPA_PTROFF(m->ptr, range.start, void);
however, when `pw_map_range` is calculated, `pw_map_range::start` is the offset
from the beginning of the first page, which starts at `pw_map_range::offset`.
This works correctly if `memblock_map()` runs, because that will map the file
with the expected offset, so using `range.start` is correct.
However, when a mapping is reused (i.e. `memblock_find_mapping()` finds something),
then `range.start` is not necessarily correct. Consider the following example:
* page size is 10
* one memblock with size 20 (2 pages)
* the application wants two mappings:
  * (offset=5, size=10)
  * (offset=15, size=5)
After the first request from the application, a `mapping` object is created
that covers the first two pages of the memblock: offset=0 and size=20. During
the second request, the calculated `pw_map_range` is as follows:
{ start = 5, offset = 10, size = 10 }
and the only previously created mapping is reused since (0 <= 5) and (10 <= 20). When
the pointer of the mapping is adjusted afterwards, it will be incorrect since `m->ptr`
points to byte 0 of page 0 (instead of byte 0 of page 1, as is assumed). Therefore
the two mappings will unexpectedly overlap.
Fix that by using `offset - m->offset` when adjusting the mapping's pointer. Also move
the `range` variable into a smaller scope, because it only makes sense there. And add
a test that checks the previously incorrect case described above.
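As a rough illustration (with simplified stand-in types, not the actual mem.c
structures), the reused-mapping case needs the pointer computed relative to the
mapping's own offset:

```c
#include <stdint.h>
#include <stdio.h>

/* simplified stand-in for a reusable mapping: page-aligned offset/size
 * within the memblock and the pointer returned by mmap() */
struct mapping {
	uint32_t offset;
	uint32_t size;
	void *ptr;
};

/* pointer for a request starting at 'offset' bytes into the block; using
 * range.start here would be relative to the page the *request* starts on,
 * which is only correct when the mapping was created for this request */
static void *mapping_ptr(struct mapping *m, uint32_t offset)
{
	return (uint8_t *)m->ptr + (offset - m->offset);
}

int main(void)
{
	/* the example above: one mapping covering bytes 0..20 of the block */
	uint8_t block[20];
	struct mapping m = { .offset = 0, .size = 20, .ptr = block };

	/* request (offset=15, size=5): must yield block+15, not block+5 */
	printf("%td\n", (uint8_t *)mapping_ptr(&m, 15) - block);
	return 0;
}
```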
Fixes: 2caf81c97c ("mem: improve memory handling")
Fixes #4884
Align RX of streams in same ISO group:
- Ensure all streams in the ISO group have the same target latency, also for
BAP Client
- Determine rate matching to ISO group clock from RX times of all
streams in the group
- Based on this, compute nominal packet RX times, and feed them to
decode-buffer instead of the real RX time. This is enough for
sub-sample level sync.
- Customise buffer overrun handling for ISO so that it drops data to
arrive exactly at the target, for faster convergence at RX start
The ISO clock matching is done based on kernel-provided packet RX times,
so it has unknown offset from the actual ISO clock, probably a few ms.
Current kernels (6.17) do not provide anything better to use for the
clock matching, and doing it properly appears to be controller
vendor-defined (if possible at all).
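A minimal sketch of the idea behind nominal packet RX times (purely
illustrative; the names, and the assumption of one packet per ISO interval,
are not taken from the actual decode-buffer code):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* nominal RX time of the n-th packet: a clean multiple of the ISO interval,
 * scaled by the estimated ratio between the kernel RX clock and the ISO
 * group clock, instead of the jittery real RX timestamp */
static uint64_t nominal_rx_time(uint64_t base_time_ns, uint64_t iso_interval_ns,
				double group_rate, uint64_t packet_index)
{
	return base_time_ns +
		(uint64_t)((double)packet_index * (double)iso_interval_ns * group_rate);
}

int main(void)
{
	/* 10 ms ISO interval, RX clock estimated to run 100 ppm fast */
	for (uint64_t n = 0; n < 3; n++)
		printf("%" PRIu64 "\n",
		       nominal_rx_time(1000000000, 10000000, 1.0001, n));
	return 0;
}
```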
Take resampler delay into account when computing the buffer fill level,
including the fractional part.
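Something along these lines, with hypothetical names (the real
resampler/buffer API differs):

```c
#include <stdint.h>
#include <stdio.h>

/* buffer fill level in samples, including the resampler delay and its
 * fractional phase; the fractional part is what gives the rate matching
 * sub-sample accuracy */
static double buffer_fill_level(uint32_t queued_samples,
				uint32_t resampler_delay_samples,
				double resampler_phase_frac /* 0.0 .. 1.0 */)
{
	return (double)queued_samples +
		(double)resampler_delay_samples +
		resampler_phase_frac;
}

int main(void)
{
	printf("%f\n", buffer_fill_level(480, 16, 0.25));
	return 0;
}
```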
If decode-buffer is now fed nominal packet reference times in
write_packet(), it converges the total buffer + resampler latency to the
target at sub-sample accuracy.
This is needed for aligning RX of ISO streams in the same group, so that
e.g. stereo pair alignment is achieved even though the streams have
separate resamplers. Resampler phases get aligned via independent rate
matching.
The rate matching calculations are done in the system clock domain. If
the driver ticks at a different rate, the correction factor needs to be
adjusted by the rate_diff.
setup_matching() also needs to be called before spa_bt_decode_buffer_process():
as follower we should use the rate matching value calculated on the
*previous* cycle, because this is what the driver does when it adjusts
its tick rate.
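Roughly like this, with made-up names (whether the correction is multiplied or
divided by rate_diff depends on the convention used; treat the direction here
as an assumption):

```c
#include <stdio.h>

struct follower {
	double prev_corr;	/* rate correction computed on the previous cycle */
};

/* apply the correction from the previous cycle, scaled from the system
 * clock domain into the driver clock domain by rate_diff, and remember the
 * newly computed value for the next cycle */
static double apply_rate_match(struct follower *f, double new_corr, double rate_diff)
{
	double corr = f->prev_corr * rate_diff;
	f->prev_corr = new_corr;
	return corr;
}

int main(void)
{
	struct follower f = { .prev_corr = 1.0 };
	printf("%f\n", apply_rate_match(&f, 1.0002, 0.9999));
	printf("%f\n", apply_rate_match(&f, 1.0001, 0.9999));
	return 0;
}
```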
Based on testing, ALSA FireWire drivers introduce additional latency
determined by the buffer size.
Report that latency.
Pass device.bus to the node, so it can recognize firewire.
FireWire ALSA driver latency is determined by the buffer size and not the
period. Timer-based scheduling is then not really useful on these devices as
the latency is fixed.
In pro-audio profile, enable IRQ scheduling unconditionally for these
devices, so that controlling the latency works properly.
See #4785
Some devices (FireWire) fail to produce audio if the period count is < 3,
and also have a small buffer size. When the quantum is too large, we might
then get too few periods and broken sound.
Set a minimum for the period count in ALSA to determine the maximum
period size we can use. If that is smaller than the period size we were going
to use, round it down to a power of 2.
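A sketch of the resulting clamping, with illustrative names; the real code
operates on the ALSA hw_params, this only shows the arithmetic:

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t round_down_pow2(uint32_t v)
{
	uint32_t p = 1;
	while (p * 2 <= v)
		p *= 2;
	return p;
}

/* largest usable period size given the device's maximum buffer size and the
 * minimum number of periods we insist on; rounded down to a power of two
 * when the wanted period does not fit */
static uint32_t max_period_size(uint32_t buffer_size_max, uint32_t min_periods,
				uint32_t wanted_period)
{
	uint32_t max_period = buffer_size_max / min_periods;

	if (wanted_period <= max_period)
		return wanted_period;
	return round_down_pow2(max_period);
}

int main(void)
{
	/* e.g. a device with a 1024-frame buffer and min 3 periods:
	 * a wanted period of 512 gets clamped to 256 */
	printf("%u\n", max_period_size(1024, 3, 512));
	return 0;
}
```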
See #4785
With the removal of `SPA_DATA_MemPtr` support, this member is no longer used.
Fixes: b948ffdb25 ("spa: libcamera: source: remove `SPA_DATA_MemPtr` support")
On production systems, having a constant high latency is favored over
dynamically adjusting it in order to optimize for low latency,
because every time a dynamic adjustment happens, there's a glitch.
This adds an option to let the user specify the exact amount of latency
they want.
The hardcoded latency of 512/<rate> is quite low on some ALSA devices.
Instead of forcing that latency onto the graph, just don't set it at all
unless it originates from the BAP presentation delay. That means that
the functionality remains the same for BAP but changes for A2DP to favor
the preferred quantum of the ALSA sink (or whichever node is the driver).
Also, avoid setting an empty string ("") latency and rate in the cases
where it's not defined. This allows users to override those properties
through the wireplumber monitor rules if they need to.
Currently the v4l2 and libcamera plugins map `SPA_PROP_exposure` in incompatible
ways. So change the v4l2 mapping to `V4L2_CID_EXPOSURE_ABSOLUTE` because at least
that is in units of time (a step closer to addressing #4697), and because that
is more relevant for UVC cameras.
Also change the pipewire-v4l2 translation layer.
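For illustration only (the conversion factor follows the V4L2 documentation,
where V4L2_CID_EXPOSURE_ABSOLUTE is expressed in 100 µs units; the helper name
is made up):

```c
#include <linux/videodev2.h>
#include <stdint.h>
#include <stdio.h>

/* convert an exposure time in microseconds to the value expected by
 * V4L2_CID_EXPOSURE_ABSOLUTE (100 µs units) */
static int32_t exposure_us_to_v4l2_absolute(uint32_t exposure_us)
{
	return (int32_t)(exposure_us / 100);
}

int main(void)
{
	printf("ctrl id 0x%08x, value %d\n",
	       V4L2_CID_EXPOSURE_ABSOLUTE, exposure_us_to_v4l2_absolute(10000));
	return 0;
}
```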
Rework how the monitor mode works. Instead of having separate paths for
the list and monitor mode, reuse the list mode. We simply mark all
changes and then list the changes in a loop.
This makes it possible to accumulate some updates and print them
together.
Add a -t option to list the latency params on a port.
Just making a port and adding it to a node does not make it a working
port.
Make a new node function to get a new free port, this will actually call
the implementation spa_node_add_port(), which will add the new port
which we can then pick up and use.
See #4876
The Max latency property only works for timer-based scheduling, so that
we don't select a quantum larger than we can handle in our buffer.
With IRQ-based scheduling this does not make sense because we will
reconfigure the buffer completely when we change quantums and so the
currently selected buffer size does not limit the latency in any way.
Fixes #4877
Remove the QUEUED flags to check if a buffer is in some queue.
Add a new flag to check if a buffer was dequeued by the application.
Check if the application only queues buffers with the DEQUEUED flag set.
The flag was used to see whether a buffer was in a queue or not, but that
doesn't really matter much, and with the DEQUEUED flag we can only move
buffers from dequeued to queued.
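A toy sketch of the new invariant (the flag and helper names here are
illustrative, not the pw_stream internals):

```c
#include <stdint.h>
#include <stdio.h>

#define BUFFER_FLAG_DEQUEUED	(1u << 0)

struct buffer {
	uint32_t id;
	uint32_t flags;
};

/* a buffer may only be queued if the application dequeued it first;
 * queuing clears the flag again */
static int queue_buffer(struct buffer *b)
{
	if (!(b->flags & BUFFER_FLAG_DEQUEUED)) {
		fprintf(stderr, "buffer %u was not dequeued\n", b->id);
		return -1;
	}
	b->flags &= ~BUFFER_FLAG_DEQUEUED;
	return 0;
}

static struct buffer *dequeue_buffer(struct buffer *b)
{
	b->flags |= BUFFER_FLAG_DEQUEUED;
	return b;
}

int main(void)
{
	struct buffer b = { .id = 0, .flags = 0 };
	queue_buffer(&b);			/* rejected: never dequeued */
	queue_buffer(dequeue_buffer(&b));	/* ok */
	return 0;
}
```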
When renegotiating stream parameters (e.g. size), the buffers
are cleared and should no longer be queued back. Add a flag to detect this,
and log a warning and error out when the user tries to queue
such a buffer.
Some drivers (FireWire) have a latency depending on the ALSA buffer size
instead of the period size.
In IRQ mode, we can safely use 2 (or 3 for batch devices) periods
because we always need to reconfigure the hardware when we want to
change the period, and so we don't need to keep some headroom like we do
for timer-based scheduling.
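The period-count choice itself is simple; a minimal sketch (names illustrative):

```c
#include <stdbool.h>
#include <stdio.h>

/* in IRQ mode no extra headroom is needed: changing the period always means
 * reconfiguring the hardware anyway, so 2 periods suffice, with one extra
 * for batch devices as noted above */
static unsigned int irq_period_count(bool is_batch)
{
	return is_batch ? 3 : 2;
}

int main(void)
{
	printf("%u %u\n", irq_period_count(false), irq_period_count(true));
	return 0;
}
```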
See #4785
Initialization of PipeWire could happen too early and deadlock in some
cases. Instead, initialize PipeWire right before we're going to actually
use it for the first time.
Fixes #4859
Only send out SUSPENDED event when there is a change in the suspended
state. This avoids sending out unsuspend events when we simply uncork.
Implement the fail_on_suspend flag for capture and playback streams.
Instead of suspending those streams, we need to kill them.
This is required in order to allow plugins to use GL, as mincore
is used in Mesa's `_eglPointerIsDereferenceable()`.
One example for a client wanting to do so is the in-development
libcamera GPUISP, see https://patchwork.libcamera.org/cover/24183/