Driver output streams will start the cycle with a _trigger() operation,
which will call the process function (if necessary) to dequeue/queue a
buffer before starting the graph cycle. At the end of the cycle, the
internal stream process function is called again to recycle any buffers
but we should not try to dequeue a new buffer (if there was one in the
queue) and report that we have data.
Do this by keeping track of when the internal process function was
called because of trigger or because of the end of the cycle. At the end
of the cycle, we can call trigger_end() but we should not prepare a
new buffer on the output io.
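A minimal sketch of this bookkeeping (the helper names and the
`from_trigger` flag are illustrative, not the actual implementation):

    /* illustrative: remember why the internal process function runs */
    static int stream_process(struct stream *s, bool from_trigger)
    {
            if (from_trigger) {
                    /* start of cycle: dequeue/queue a buffer if needed */
                    return prepare_output_buffer(s);
            }
            /* end of cycle: recycle buffers and complete the cycle, but
             * do not prepare a new buffer on the output io */
            recycle_buffers(s);
            trigger_end(s);
            return 0;
    }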
Use the timer queue for scheduling stream and object data timeouts.
This avoids allocating timerfds for these timeouts and the timer queue
can handle many timeouts more efficiently.
If we don't get a link on a stream, we might never send a create stream
reply. The client handles this fine by timing out after 30s and dropping
the stream, but the server holds on to the pw_stream forever (or until
the client quits).
Let's add a timer to clean up such streams on the server.
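A sketch of how such a timeout can be armed with the PipeWire loop
utilities (the 30s value matches the client timeout mentioned above; the
`stream_timeout` callback and the surrounding struct are illustrative):

    /* illustrative: destroy a stream that never got a link in time */
    static void stream_timeout(void *data, uint64_t expirations)
    {
            struct stream *s = data;
            pw_stream_destroy(s->stream);
    }

    struct timespec value = { .tv_sec = 30 };
    s->timer = pw_loop_add_timer(loop, stream_timeout, s);
    pw_loop_update_timer(loop, s->timer, &value, NULL, false);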
Fixes: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/4901
Wireplumber loads the libcamera nodes into the pipewire server.
We need to remove the RestrictNamespaces option from the service file
to allow libcamera to load sandboxed IPA modules.
Add a port.exclusive flag and inherit the value from the node.exclusive
flag if not otherwise specified.
Make it so that exclusive ports can only be linked once. This is
important for explicit sync where there can be only one producer and one
consumer in order to signal the timeline objects correctly.
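The check this implies when creating a link could look like this (a
sketch; the flag identifier and the link list are illustrative):

    /* illustrative: refuse a second link on an exclusive port */
    if (SPA_FLAG_IS_SET(port->flags, PORT_FLAG_EXCLUSIVE) &&
        !spa_list_is_empty(&port->links))
            return -EBUSY;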
pw_stream now handles the other (output) latency for us: it keeps the
param and reports it. If we are not interested in the upstream latency,
we don't have to parse and store it and can just be concerned with the
latency we report on our input port (the input latency).
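For reference, parsing an incoming latency param into the SPA helper
struct looks roughly like this (a sketch; `impl->latency` is an
illustrative field):

    #include <spa/param/latency-utils.h>

    struct spa_latency_info info;
    /* only keep the info that describes our input side */
    if (spa_latency_parse(param, &info) >= 0 &&
        info.direction == SPA_DIRECTION_INPUT)
            impl->latency = info;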
Update the scheduling doc with some information about how async
scheduling works. Also add some notes about latency.
Async links add 1 quantum of latency so take that into account when
aggregating latencies.
Also a source directly linked to an async node does not add latency
(we evaluate the tee before incrementing the cycle so that it effectively
is executed in the previous cycle and consumed immediately by async
nodes). We can do this because the driver source always provides data
before the async node, and never concurrently.
Add a listener to the link for the node driver change as well because
that can now influence the latency for async nodes.
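What the aggregation rule amounts to, sketched with the fields of
`struct spa_latency_info` (the two conditions are paraphrased from the
text above):

    /* illustrative: an async link adds one quantum of latency, unless
     * the upstream node is the driver source feeding the async node */
    if (link_is_async && !upstream_is_driver_source) {
            latency.min_quantum += 1.0f;
            latency.max_quantum += 1.0f;
    }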
do_node_unprepare runs in both the server and the client when a node is
stopped. On the server side, set the status to FINISHED and trigger any
targets. This ensures the node will not be scheduled in this cycle
anymore. We have to do this because we can't know if the node is still
alive or not.
When the client receives the stop message, it will unprepare and set the
status to INACTIVE. This ensures the driver will no longer trigger the
node. If the server didn't already trigger the targets, do this in the
remote node then.
This avoids a race where both the client and the server are setting the
status: if the INACTIVE state is set by the server, it might stall
processing of the client.
Fixes #4840
The docs say that a requested size of 0 can be returned and it means
that there is no suggestion for the size.
Make this so by decoupling the requested size value and the triggering
of the process callback. If we have no rate_match and no quantum
(because the driver didn't set it) we still want to schedule with a 0
requested size.
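A sketch of the decoupling (field names follow the SPA io structs;
`impl` and `call_process` are illustrative):

    /* illustrative: compute the suggested size, which may legitimately
     * stay 0 when there is no rate match and no quantum, and schedule
     * the process callback either way */
    uint64_t req = 0;
    if (impl->rate_match != NULL)
            req = impl->rate_match->size;
    else if (impl->position != NULL)
            req = impl->position->clock.duration;
    buffer->requested = req;    /* 0 means: no suggestion */
    call_process(impl);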
Previously the pointer was determined as follows:
mm->this.ptr = SPA_PTROFF(m->ptr, range.start, void);
however, when `pw_map_range` is calculated, `pw_map_range::start` is the offset
from the beginning of the first page, starting at `pw_map_range::offset`.
This works correctly if `memblock_map()` runs because that will map the file
with expected offset, so using `range.start` is correct.
However, when a mapping is reused (i.e. `memblock_find_mapping()` finds
something), then `range.start` is not necessarily correct. Consider the following example:
* page size is 10
* one memblock with size 20 (2 pages)
* the application wants two mappings:
  * (offset=5, size=10)
  * (offset=15, size=5)
After the first request from the application, a `mapping` object is created
that covers the first two pages of the memblock: offset=0 and size=20. During
the second request, the calculated `pw_map_range` is as follows:
{ start = 5, offset = 10, size = 10 }
and the only previously created mapping is reused since (0 <= 5) and (10 <= 20). When
the pointer of the mapping is adjusted afterwards, it will be incorrect since `m->ptr`
points to byte 0 on page 0 (instead of byte 0 on page 1, as is assumed). Therefore
the two mappings will unexpectedly overlap.
Fix that by using `offset - m->offset` when adjusting the mapping's pointer. Also move
the `range` variable into a smaller scope because it only makes sense there, and add
a test that checks the previously incorrect case described above.
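In code, the fix amounts to the following (both expressions are taken
from the text above; the surrounding context is simplified):

    /* before: wrong when an existing mapping is reused, because
     * range.start is relative to the page containing range.offset,
     * not to the start of the reused mapping */
    mm->this.ptr = SPA_PTROFF(m->ptr, range.start, void);

    /* after: offset relative to where the reused mapping begins */
    mm->this.ptr = SPA_PTROFF(m->ptr, offset - m->offset, void);

In the example above, the second request now resolves to
`m->ptr + (15 - 0)` instead of `m->ptr + 5`, so the two mappings no
longer overlap.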
Fixes: 2caf81c97c ("mem: improve memory handling")
Fixes #4884
Rework how the monitor mode works. Instead of having separate paths for
the list and monitor mode, reuse the list mode. We simply mark all
changes and then list the changes in a loop.
This makes it possible to accumulate some updates and print them
together.
Add a -t option to list the latency params on a port.
Just making a port and adding it to a node does not make it a working
port.
Make a new node function to get a new free port; this will actually call
the implementation spa_node_add_port(), which will add the new port,
which we can then pick up and use.
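A sketch of the resulting flow (the lookup helpers are illustrative;
`spa_node_add_port()` is the actual SPA method):

    /* illustrative: ask the implementation to create the port, then
     * pick it up and use it */
    uint32_t port_id = find_free_port_id(node, SPA_DIRECTION_OUTPUT);
    int res;
    if ((res = spa_node_add_port(node->node, SPA_DIRECTION_OUTPUT,
                                 port_id, NULL)) < 0)
            return res;
    port = find_port(node, SPA_DIRECTION_OUTPUT, port_id);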
See #4876
Remove the QUEUED flags to check if a buffer is in some queue.
Add a new flag to check if a buffer was dequeued by the application.
Check if the application only queues buffers with the DEQUEUED flag set.
The QUEUED flag was used to see whether a buffer was in a queue or not,
but that doesn't really matter much; with the DEQUEUED flag we can only
move buffers from dequeued to queued.
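The invariant, sketched as the check in the queue path (the exact flag
identifier is not verified against the code):

    /* illustrative: only buffers that were dequeued by the application
     * may be queued back */
    if (!SPA_FLAG_IS_SET(b->flags, BUFFER_FLAG_DEQUEUED))
            return -EINVAL;
    SPA_FLAG_CLEAR(b->flags, BUFFER_FLAG_DEQUEUED);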
When renegotiating stream parameters (e.g. the size), the buffers are
cleared and should no longer be queued back. Add a flag to detect this,
logging a warning and erroring out when the user tries to queue such a
buffer.
Only send out SUSPENDED event when there is a change in the suspended
state. This avoids sending out unsuspend events when we simply uncork.
Implement the fail_on_suspend flag for capture and playback streams.
Instead of suspending those streams, we need to kill them.
This is required in order to allow plugins to use GL, as mincore is
used in Mesa's `_eglPointerIsDereferenceable()`.
One example for a client wanting to do so is the in-development
libcamera GPUISP, see https://patchwork.libcamera.org/cover/24183/
Add a sample limit switch -n to pw-cat to stop the recording or playback
after a set number of samples has been received.
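For example, `pw-record -n 48000 out.wav` would then stop after 48000
samples (assuming the option takes a plain sample count; see the pw-cat
documentation for the exact semantics).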
The started boolean is insufficient to fully cover the possible internal
states. For this reason, it needs to be replaced by an enum that covers
these states.
Also, due to potential access by both the dataloop and the mainloop,
access to that internal state needs to be synchronized.
Finally, a variable "internal_state" makes for code that is easier to
read, since it emphasizes that this is state that is fully internal
inside the stream (and is not visible to the rtp-sink and rtp-source
modules for example).
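A sketch of the shape this takes (the state names and the lock are
illustrative):

    /* illustrative: explicit internal states, guarded by a mutex since
     * both the data loop and the main loop can touch them */
    enum internal_state {
            STATE_IDLE,
            STATE_STARTING,
            STATE_STARTED,
            STATE_STOPPING,
    };

    struct stream {
            pthread_mutex_t state_mutex;    /* protects internal_state */
            enum internal_state internal_state;
    };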
The state_changed callbacks fulfill multiple roles, which is both a problem
regarding separation of concerns and regarding code clarity. De facto,
these callbacks cover error reporting, opening connections, and closing
connection, all in one, depending on a state that is arguably an internal
stream detail. The code in these callbacks ties these internal states to
the assumption that opening/closing connections is directly tied to specific
state changes in a common way, which is not always true. For example,
stopping the stream may not _actually_ stop it if a background send timer
is still running.
The notion of a "state_changed" callback is also problematic because the
pw_streams that are used in rtp-sink and rtp-source also have a callback
for state changes, causing confusion.
Solve this by replacing state_changed with three new callbacks:
1. report_error : Used for reporting nonrecoverable errors to the caller.
Note that currently, no one does such error reporting, but the feature
does exist, so this callback is introduced to preserve said feature.
2. open_connection : Used for opening a connection. Its optional return
value informs about success or failure.
3. close_connection : Used for closing a connection. Its optional return
value informs about success or failure.
Importantly, these callbacks do not export any internal stream state. This
improves encapsulation, and also makes it possible to invoke these
callbacks in situations that may not neatly map to a state change. One
example could be to close the connection as part of a stream_start call
to close any connection(s) left over from a previous run. (Followup commits
will in fact introduce such measures.)
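Put together, the replacement callback set looks roughly like this (a
sketch; the exact signatures are not taken from the code):

    /* illustrative callback set replacing state_changed */
    struct stream_callbacks {
            /* report a nonrecoverable error to the caller */
            void (*report_error)(void *data, int res, const char *error);
            /* open/close the connection; a negative return value
             * indicates failure */
            int (*open_connection)(void *data);
            int (*close_connection)(void *data);
    };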
Downstream elements accessing dmabufs from the CPU currently need to map
and unmap buffers on every frame.
While the kernel's page cache avoids most overhead, this still requires a
round-trip through the kernel and possibly non-negligible work in
`dma_buf_mmap()`. Keeping the buffers mapped avoids that without causing
additional synchronization work, as the latter should only happen when
`DMA_BUF_IOCTL_SYNC` is triggered in `gst_dmabuf_mem_map()`.
A common scenario where this matters is clients using cameras. The
downstream elements in question may not be aware of dmabufs - e.g.
`videoconvert` - or fail to import the dmabuf and fall back to importing
from memory - e.g. `glupload`.
Notes:
- GstShmAllocator implicitly does this already.
- We could also do this in the MemFd case, however I'm less convinced
about the trade-offs.
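The idea, sketched with the GStreamer memory API (simplified, not the
actual patch; `persistent_map` is an illustrative field):

    /* illustrative: map the dmabuf-backed GstMemory once when the
     * buffer is set up and keep the mapping for the buffer's lifetime,
     * instead of mapping/unmapping on every frame */
    GstMapInfo info;
    if (gst_memory_map(mem, &info, GST_MAP_READ))
            d->persistent_map = info;   /* gst_memory_unmap() on free */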
When a link enters the "ERROR" state, it is scheduled for destruction in
`module-link-factory.c:link_state_changed()`, which queues `destroy_link()`
to be executed on the context's work queue.
However, if the link is destroyed by means of `pw_impl_link_destroy()`
directly after that, then `link_destroy()` unregisters the associated
`pw_global`'s event hook, resulting in `global_destroy()` not being called
when `pw_impl_link_destroy()` proceeds to call `pw_global_destroy()` some
time later. This causes the scheduled async work to not be cancelled. When
it runs later, it will trigger a use-after-free since the `link_data` object
is directly tied to the `pw_impl_link` object.
For example, if the link is destroyed when the client disconnects:
==259313==ERROR: AddressSanitizer: heap-use-after-free on address 0x7ce753028af0 at pc 0x7f475354a565 bp 0x7ffd71501930 sp 0x7ffd71501920
READ of size 8 at 0x7ce753028af0 thread T0
#0 0x7f475354a564 in destroy_link ../src/modules/module-link-factory.c:253
#1 0x7f475575a234 in process_work_queue ../src/pipewire/work-queue.c:67
#2 0x7b47504e7f24 in source_event_func ../spa/plugins/support/loop.c:1011
[...]
0x7ce753028af0 is located 1136 bytes inside of 1208-byte region [0x7ce753028680,0x7ce753028b38)
freed by thread T0 here:
#0 0x7f475631f79d in free /usr/src/debug/gcc/gcc/libsanitizer/asan/asan_malloc_linux.cpp:51
#1 0x7f4755594a44 in pw_impl_link_destroy ../src/pipewire/impl-link.c:1742
#2 0x7f475569dc11 in do_destroy_link ../src/pipewire/impl-port.c:1386
#3 0x7f47556a428b in pw_impl_port_for_each_link ../src/pipewire/impl-port.c:1673
#4 0x7f475569dc3e in pw_impl_port_unlink ../src/pipewire/impl-port.c:1392
#5 0x7f47556a02d8 in pw_impl_port_destroy ../src/pipewire/impl-port.c:1453
#6 0x7f4755634f79 in pw_impl_node_destroy ../src/pipewire/impl-node.c:2447
#7 0x7b474f722ba8 in client_node_resource_destroy ../src/modules/module-client-node/client-node.c:1253
#8 0x7f47556d7c6c in pw_resource_destroy ../src/pipewire/resource.c:325
#9 0x7f475545f07d in destroy_resource ../src/pipewire/impl-client.c:627
#10 0x7f47554550cd in pw_map_for_each ../src/pipewire/map.h:222
#11 0x7f4755460aa4 in pw_impl_client_destroy ../src/pipewire/impl-client.c:681
#12 0x7b474fb0658b in handle_client_error ../src/modules/module-protocol-native.c:471
[...]
Fix this by cancelling the work queue item in `link_destroy()`, which should
always run, regardless of the ordering of events.
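The fix, sketched with the work-queue API (simplified; `impl->work` is
an illustrative reference to the module's work queue):

    /* in link_destroy(): drop any pending destroy_link() item so it
     * cannot run after the link memory has been freed */
    pw_work_queue_cancel(impl->work, link, SPA_ID_INVALID);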
Fixes #4691
Improve the spa_ump_to_midi function so that it can consume multiple UMP
messages and produce multiple midi messages.
Some UMP messages (like program changes) need to be translated into up
to 3 midi messages. Do this by adding a state to the function and by
making it consume the input bytes, just like the spa_ump_from_midi
function.
Adapt the code to this new world. This is a small API break.
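The resulting calling convention looks roughly like this (the signature
below is modeled on the description above, not copied from the header):

    /* illustrative: drain the input, emitting up to 3 midi messages
     * per UMP message across successive calls via the state */
    uint64_t state = 0;
    uint8_t midi[8];
    while (ump_size > 0) {
            int len = spa_ump_to_midi(&ump, &ump_size,
                            midi, sizeof(midi), &state);
            if (len <= 0)
                    break;
            handle_midi(midi, len);
    }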