Make the parametric-equalizer module destroy the underlying filter-chain
module on destruction. This ensures the EQ nodes are destroyed on unload.
Fixes #5045
WirePlumber recently added a mechanism to force mono mixdown on audio
outputs, which is a useful feature for accessibility. Let's also expose
that setting via libpulse so that existing audio settings UIs can
use it.
PipeWire uses a rate of 256/7680 with the integrated camera of Apple
silicon MacBooks. To calculate pw_time.delay correctly in this case it
has to be divided by time->rate.num. Without this division the delay
contribution of the `((latency->min_ns + latency->max_ns) / 2)` term
ends up as 255, which is 8.5 seconds.
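For reference, a minimal sketch of that arithmetic (the helper name is made
up; this is not the actual stream.c or pipewiresrc code): converting a
latency in nanoseconds into ticks of the pw_time rate, where the division by
rate.num is exactly what this change adds.

    #include <stdint.h>
    #include <spa/utils/defs.h>

    /* Hypothetical helper, only to illustrate the arithmetic above. */
    static uint64_t latency_ns_to_ticks(uint64_t ns, struct spa_fraction rate)
    {
            if (rate.num == 0)
                    return 0;
            /* ticks = ns * denom / (num * NSEC_PER_SEC). With rate 256/7680 and
             * ns ~= 33 ms this gives ~1 tick; omitting the division by rate.num
             * gives ~255 ticks, i.e. 8.5 seconds. */
            return ns * rate.denom / (rate.num * SPA_NSEC_PER_SEC);
    }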
pipewiresrc reports the delay as latency in the GStreamer pipeline, which
results in rendering a frame every 8.5 seconds.
I suspect the non-normalized rate of 256/7680 is another bug in
PipeWire. The rate for a UVC webcam is reported as 1/30. Both
Video4Linux2 devices report a discrete frame interval of 0.033s (30fps).
Fixes #4957
(cherry picked from commit f03021edd1)
GST_SECOND * t.rate.num can turn into a negative gint, resulting in
assertions like:
_gst_util_uint64_scale_int: assertion 'num >= 0' failed
Just use the 64-bit version instead.
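A minimal sketch of the pattern (assumed names, not the exact pipewiresrc
code): GST_SECOND * rate.num no longer fits in the gint num argument of
gst_util_uint64_scale_int(), so the 64-bit gst_util_uint64_scale() is used.

    #include <gst/gst.h>

    /* Hypothetical helper illustrating the change described above. */
    static GstClockTime delay_to_time(guint64 delay, guint32 rate_num,
                                      guint32 rate_denom)
    {
            if (rate_denom == 0)
                    return GST_CLOCK_TIME_NONE;
            /* gst_util_uint64_scale_int(delay, GST_SECOND * rate_num, rate_denom)
             * truncates the num argument to a gint, which can go negative and
             * trip the 'num >= 0' assertion. */
            return gst_util_uint64_scale(delay, GST_SECOND * (guint64) rate_num,
                                         rate_denom);
    }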
(cherry picked from commit 77a5100280)
When we fire the timer event, mark the next timeout as NULL because
nothing else is going to time out anymore until we rearm the timer.
This has the effect that if we cancel and add the same timer from the
callback, we will reprogram the timer with the new timeout instead of
treating the item as already programmed.
Use the timer queue for scheduling stream and object data timeouts.
This avoids allocating timerfds for these timeouts and the timer queue
can handle many timeouts more efficiently.
If we don't get a link on a stream, we might never send a create stream
reply. The client handles this fine by timing out after 30s and dropping
the stream, but the server holds on to the pw_stream forever (or until
the client quits).
Let's add a timer to clean up such streams on the server.
Fixes: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/4901
Avoid shadowing some variables from the parent block.
The node of a target can be NULL when the target is running in another
instance. We already do some checks for this but make sure we never
dereference the NULL pointer.
Fixes #4922
Add a flag to make_sdp to note if this should be a new SDP or a temp
SDP to compare to the existing one.
Move the update of session_id and hash to when we make a new SDP. This
way we also update session_id and hash when we make the first SDP.
This fixes the initial undefined SDP session id and hash.
Fixes #4852
Wireplumber loads the libcamera nodes into the pipewire server.
We need to remove the RestrictNamespaces option from the service file
to allow libcamera to load sandboxed IPA modules.
do_node_unprepare runs in both the server and the client when a node is
stopped. On the server side, set the status to FINISHED and trigger any
targets. This ensures the node will not be scheduled in this cycle
anymore. We have to do this because we can't know if the node is still
alive or not.
When the client receives the stop message, it will unprepare and set the
status to INACTIVE. This ensures the driver will no longer trigger the
node. If the server didn't already trigger the targets, do this from the
remote node instead.
This avoids a race where both the client and the server are setting the
status; if the INACTIVE state is set by the server, it might stall
processing of the client.
Fixes #4840
Previously the pointer was determined as follows:
mm->this.ptr = SPA_PTROFF(m->ptr, range.start, void);
however, when `pw_map_range` is calculated, `pw_map_range::start` is the offset
from the beginning of the first page, starting at `pw_map_range::offset`.
This works correctly if `memblock_map()` runs because that will map the file
with expected offset, so using `range.start` is correct.
However, when a mapping is reused (i.e. `memblock_find_mapping()` finds something),
then `range.start` is not necessarily correct. Consider the following example:
* page size is 10
* one memblock with size 20 (2 pages)
* the application wants two mappings:
* (offset=5,size=10)
* (offset=15,size=5)
After the first request from the application, a `mapping` object is created
that covers the first two pages of the memblock: offset=0 and size=20. During
the second request, the calculated `pw_map_range` is as follows:
{ start = 5, offset = 10, size = 10 }
and the only previously created mapping is reused since (0 <= 5) and (10 <= 20). When
the pointer of the mapping is adjusted afterwards it will be incorrect since `m->ptr`
points to byte 0 on page 0 (instead of byte 0 on page 1, as is assumed). Therefore
the two mappings will unexpectedly overlap.
Fix that by using `offset - m->offset` when adjusting the mapping's pointer. Also move
the `range` variable into a smaller scope because it only makes sense there. And add
a test that checks the previously incorrect case described above.
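A small, self-contained sketch of the arithmetic using the example above
(hypothetical names, not the actual mem.c code):

    #include <stdint.h>
    #include <stdio.h>

    /* The reused mapping starts at m_offset within the memblock; a request for
     * `offset` must be adjusted relative to the mapping, not by range.start. */
    static void *mapping_pointer(void *m_ptr, uint32_t m_offset, uint32_t offset)
    {
            return (uint8_t *) m_ptr + (offset - m_offset);
    }

    int main(void)
    {
            uint8_t block[20];      /* two pages of size 10 */

            /* Reusing the (offset=0, size=20) mapping for the (offset=15, size=5)
             * request: range.start would give 5 (byte 5 of page 0, wrong), while
             * offset - m_offset gives 15 (byte 5 of page 1, as expected). */
            printf("%td\n", (uint8_t *) mapping_pointer(block, 0, 15) - block);
            return 0;
    }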
Fixes: 2caf81c97c ("mem: improve memory handling")
Fixes #4884
Remove the QUEUED flag that was used to check if a buffer is in some queue.
Add a new flag to check if a buffer was dequeued by the application.
Check if the application only queues buffers with the DEQUEUED flag set.
The flag was used to see if a buffer was in a queue or not but that
doesn't really matter much; with the DEQUEUED flag we can only move
buffers from dequeued to queued.
When renegotiating stream parameters (e.g. size), the buffers
are cleared and should no longer be queued back. Add a flag to detect this,
while logging a warning and erroring out when the user tries to queue
such a buffer.
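For reference, a sketch of the usage contract this enforces on the pw_stream
API (a minimal process callback, not taken from any particular client):

    #include <pipewire/pipewire.h>

    static void on_process(void *userdata)
    {
            struct pw_stream *stream = userdata;
            struct pw_buffer *b;

            /* only buffers obtained here carry the DEQUEUED state ... */
            if ((b = pw_stream_dequeue_buffer(stream)) == NULL)
                    return;

            /* ... produce or consume b->buffer ... */

            /* ... and only such buffers may be handed back; queueing anything
             * else now logs a warning and fails. */
            pw_stream_queue_buffer(stream, b);
    }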
This is required in order to allow plugins to use GL as mincore
is used in Mesa's `_eglPointerIsDereferenceable()`.
One example of a client wanting to do so is the in-development
libcamera GPUISP, see https://patchwork.libcamera.org/cover/24183/
When a link enters the "ERROR" state, it is scheduled for destruction in
`module-link-factory.c:link_state_changed()`, which queues `destroy_link()`
to be executed on the context's work queue.
However, if the link is destroyed by means of `pw_impl_link_destroy()`
directly after that, then `link_destroy()` unregisters the associated
`pw_global`'s event hook, resulting in `global_destroy()` not being called
when `pw_impl_link_destroy()` proceeds to call `pw_global_destroy()` some
time later. This causes the scheduled async work to not be cancelled. When
it runs later, it will trigger a use-after-free since the `link_data` object
is directly tied to the `pw_impl_link` object.
For example, if the link is destroyed when the client disconnects:
==259313==ERROR: AddressSanitizer: heap-use-after-free on address 0x7ce753028af0 at pc 0x7f475354a565 bp 0x7ffd71501930 sp 0x7ffd71501920
READ of size 8 at 0x7ce753028af0 thread T0
#0 0x7f475354a564 in destroy_link ../src/modules/module-link-factory.c:253
#1 0x7f475575a234 in process_work_queue ../src/pipewire/work-queue.c:67
#2 0x7b47504e7f24 in source_event_func ../spa/plugins/support/loop.c:1011
[...]
0x7ce753028af0 is located 1136 bytes inside of 1208-byte region [0x7ce753028680,0x7ce753028b38)
freed by thread T0 here:
#0 0x7f475631f79d in free /usr/src/debug/gcc/gcc/libsanitizer/asan/asan_malloc_linux.cpp:51
#1 0x7f4755594a44 in pw_impl_link_destroy ../src/pipewire/impl-link.c:1742
#2 0x7f475569dc11 in do_destroy_link ../src/pipewire/impl-port.c:1386
#3 0x7f47556a428b in pw_impl_port_for_each_link ../src/pipewire/impl-port.c:1673
#4 0x7f475569dc3e in pw_impl_port_unlink ../src/pipewire/impl-port.c:1392
#5 0x7f47556a02d8 in pw_impl_port_destroy ../src/pipewire/impl-port.c:1453
#6 0x7f4755634f79 in pw_impl_node_destroy ../src/pipewire/impl-node.c:2447
#7 0x7b474f722ba8 in client_node_resource_destroy ../src/modules/module-client-node/client-node.c:1253
#8 0x7f47556d7c6c in pw_resource_destroy ../src/pipewire/resource.c:325
#9 0x7f475545f07d in destroy_resource ../src/pipewire/impl-client.c:627
#10 0x7f47554550cd in pw_map_for_each ../src/pipewire/map.h:222
#11 0x7f4755460aa4 in pw_impl_client_destroy ../src/pipewire/impl-client.c:681
#12 0x7b474fb0658b in handle_client_error ../src/modules/module-protocol-native.c:471
[...]
Fix this by cancelling the work queue item in `link_destroy()`, which should
always run, regardless of the ordering of events.
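A minimal sketch of that fix (the struct fields are assumptions; the real
code lives in module-link-factory.c):

    #include <pipewire/impl.h>
    #include <pipewire/work-queue.h>

    /* Hypothetical per-link data, only to show the cancellation call. */
    struct link_data {
            struct pw_work_queue *work;
            struct pw_impl_link *link;
    };

    static void link_destroy(void *data)
    {
            struct link_data *d = data;

            /* Drop any queued destroy_link() item before the link memory can be
             * freed, independent of whether global_destroy() runs later. */
            pw_work_queue_cancel(d->work, d->link, SPA_ID_INVALID);
    }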
Fixes #4691
Add support for FairPlay SAP v2.5 (encryption type 5) devices such as the Apple HomePod mini.
Apparently only these devices require the `POST /feedback` heartbeat, so fix that.
We also need to close the SyncObj fd we got, just like we close any
DmaBuf or MemFd.
Make sure we get a compiler error when we add more items to the
data type enumeration later.
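A self-contained sketch of that pattern (hypothetical enum, not the exact SPA
data-type enum), assuming the build treats -Wswitch as an error:

    #include <unistd.h>

    enum data_type { DATA_MemPtr, DATA_MemFd, DATA_DmaBuf, DATA_SyncObj };

    static void close_data(enum data_type type, int fd)
    {
            switch (type) {
            case DATA_MemPtr:
                    break;                  /* nothing to close */
            case DATA_MemFd:
            case DATA_DmaBuf:
            case DATA_SyncObj:              /* now closed like DmaBuf and MemFd */
                    close(fd);
                    break;
            }
            /* no default case: adding a new enumerator without handling it here
             * turns the omission into a compiler diagnostic. */
    }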
Fixes #4807
Reset buffers when deactivating to avoid having old data in the
ringbuffers, which also adds latency when activated again.
Clear sink_ready and capture_ready when resetting buffers to avoid
calling process() before there is new data to process.
The capture and sink streams may start before the playback stream, so process()
may fail to dequeue a playback buffer. In that case advance the read
pointers to avoid building up latency in the ringbuffers.
Because we do the processing of the graph in the playback process
function, only do graph reset and reconfigure from the playback state
change, so that process() and a state change don't run at the same time
and crash.
When the stream is paused, the internal delay buffers were cleared, but some
data could stay in the stream output queue. Without a flush, this data was
played before the new data.
The patch was inspired by 64d6ff4184, which fixed the
same issue in the filter-chain module.
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
The combine stream selects the biggest latency from all output streams and sends
the latency upstream. To select the biggest latency, each stream needs to have
the sample rate and the quantum size set.
The combine stream recalculates the latency in the latency changed callback
or during data processing.
The stream sets the sample rate and the quantum size in a copy_position call,
which is normally made while processing the output data or when the state
changes to streaming.
Before this change, there was no guarantee that copy_position had already been
called for each stream, so the latency in the combine stream was selected from
a random stream.
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>