Don't blindly mmap the buffer; only do it when the data pointer
is NULL. If it was already mapped by the peer, the adapter or the
buffer allocation, we don't want to mmap it again and override the buffer
data pointer.
Also mmap with the permissions set on the data. There is not much point in
limiting the permissions for an input port (to read-only). We could do
this, but then we would not be allowed to modify the existing data
pointer. The problem is that when the stream mmaps the data as read-only
and sets the data pointer, and the buffer is then handed to the mixer, the
mixer would assume it is mapped with the permissions it needs and then
segfault when it tries to write to the memory. It's just better to only
mmap when the data is NULL.
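Roughly, the rule looks like this (a minimal sketch, not the actual stream
code; the helper name is made up):

    #include <stddef.h>
    #include <sys/mman.h>
    #include <spa/buffer/buffer.h>

    /* Map one spa_data only when nothing set the pointer yet, and derive the
     * protection from the data flags instead of forcing read-only. */
    static void map_data_if_needed(struct spa_data *d)
    {
            int prot = 0;
            void *ptr;

            if (d->flags & SPA_DATA_FLAG_READABLE)
                    prot |= PROT_READ;
            if (d->flags & SPA_DATA_FLAG_WRITABLE)
                    prot |= PROT_WRITE;

            if (d->data != NULL)
                    return;  /* already mapped by the peer/adapter/allocation */

            ptr = mmap(NULL, d->maxsize, prot, MAP_SHARED, (int)d->fd,
                       d->mapoffset);
            if (ptr != MAP_FAILED)
                    d->data = ptr;
    }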
We don't need to do this ourselves; the MAP_BUFFERS port flag already
makes sure this is done for us.
We used to have to do this here to ensure the mixer could find the data
pointer and not error out. Now that the mixer can MMAP, this can go.
See #4918
There are really 2 options for the buffer allocation:
1. allocate the buffers skeleton and meta/chunk/data in malloc memory.
This is when the PW_BUFFERS_FLAG_SHARED is unset.
2. allocate the buffers skeleton in malloc memory and the meta/chunk/data
in shared memory when the PW_BUFFERS_FLAG_SHARED is set.
Optionally the data can be left unallocated in both cases when the
PW_BUFFERS_FLAG_NO_MEM is set. In this case we also need to pass the
SPA_BUFFER_ALLOC_FLAG_NO_DATA flag to the allocator or else it will set the
data pointers to 0 sized memory in the skeleton.
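Schematically, the flag handling is something like this (a sketch; the helper
and its booleans are made up, only SPA_BUFFER_ALLOC_FLAG_NO_DATA is the real
flag):

    #include <stdbool.h>
    #include <stdint.h>
    #include <spa/buffer/alloc.h>

    /* Map the negotiated PW_BUFFERS flags (passed here as booleans) onto the
     * allocation strategy and the SPA allocator flags. */
    static uint32_t choose_alloc_flags(bool flag_shared, bool flag_no_mem,
                                       bool *use_shm)
    {
            uint32_t flags = 0;

            /* option 2: meta/chunk/data go into shared memory */
            *use_shm = flag_shared;

            /* keep the allocator from pointing the data at 0-sized memory in
             * the skeleton when we leave the data unallocated */
            if (flag_no_mem)
                    flags |= SPA_BUFFER_ALLOC_FLAG_NO_DATA;

            return flags;
    }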
If we use SHARED and we allocated memory, we can also set the MemFd and
mapoffset into our shared mem. We can do this even if the data_type is
MemPtr.
We can decide on the data type to use earlier, based on the negotiated
flags. In the MemFd case, make sure the buffer data is page aligned to
make things easier. Also force everything into SHARED mem when the data
is in SHARED mem. We also don't need PW_BUFFERS_FLAG_SHARED_MEM anymore
because we now work with the negotiated flags to decide whether SHARED
mem is needed or not.
With this change, a node port could provide a MemFd data_type mask in
the Buffers param and this would negotiate shared mem with the mixer.
Previously, it would only ever allocate malloc memory.
See #4918
When we have a mixer node and we need to negotiate buffers between the
mixer and the node, take the CAN_ALLOC flag into account.
This is for input ports, which can have a mixer. If you make a filter
with a CAN_ALLOC input port, it will no longer come with buffer data
already allocated.
See #4918
Find leaf nodes by looking at the number of max in/out ports and the
link group. This should give us nodes that only consume/produce data.
If a leaf node is linked to a driver with only passive links, it will
never be scheduled unless we also make it runnable when the driver is
made runnable from another node.
This can happen when you do:
pw-record -P '{ node.passive=true }' test.wav
and then
pw-record test2.wav
Without this, the first pw-record would never be scheduled. With the
patch it will be scheduled when the second pw-record is started.
Fixes #4915
When clients connect over IP, add the peer IP address to the properties. We
might use this later to make a better stream node.name than a copy of the
client application name.
When we fire the timer event, mark the next timeout as NULL because
nothing else is going to time out until we rearm the timer.
This has the effect that if we cancel and re-add the same timer from the
callback, we will reprogram the timer with the new timeout instead of
treating the item as already programmed.
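The idea, sketched with hypothetical type and helper names:

    struct timer_item;

    struct timer_queue {
            struct timer_item *next;  /* item currently programmed, or NULL */
    };

    void dispatch_expired(struct timer_queue *q);  /* fires due callbacks */
    void rearm(struct timer_queue *q);             /* programs earliest item */

    static void on_timer_event(struct timer_queue *q)
    {
            /* nothing else will time out until we rearm */
            q->next = NULL;
            dispatch_expired(q);  /* callbacks may cancel + re-add an item */
            rearm(q);
    }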
Now that the server asks for the right number of samples for DSD, just
give it that number of samples without doing any weird scaling.
Make a method to calculate the size (stride) of one sample, which
depends on the interleave and channels of the stream.
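A sketch of such a helper, assuming the stride is simply the interleave
(bytes per channel) times the channel count:

    #include <stdint.h>

    /* Hypothetical helper: size in bytes of one sample across all channels,
     * from the stream's DSD interleave and channel count. */
    static uint32_t sample_stride(int32_t interleave, uint32_t channels)
    {
            uint32_t bytes = (uint32_t)(interleave < 0 ? -interleave : interleave);
            return bytes * channels;
    }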
See !2540
Don't update info.props all the time, just once when we create the
properties; the dict will not change after that.
Move the port property check code to a new function. Keep track of
whether we auto-generated the path, name or alias and whether we
explicitly update it or not.
Listen for node property changes and update the port properties if
necessary. Some of the port properties or features depend on the node
properties, so we want to keep those in sync.
Make 2 new node properties to make all ports of a node terminal or
physical.
Skip the monitor ports for this, though; they can never be terminal or
physical.
This is important for JACK clients, which often enumerate physical
terminal ports in order to link to them; with this you can make JACK
clients link to virtual sinks and sources as well.
Add a new features property to the metadata param. This should be
of type CHOICE_FEATURES_Int and should contain the extra features
supported by this metadata.
Make a special features metadata type that is a combination of the
metadata type in the upper 16 bits and the features for that type in the
lower 16 bits. Make a function to check whether a type has certain feature
bits.
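For illustration, the packing and the check could look something like this
(the macro and function names are made up, not the actual API):

    #include <stdbool.h>
    #include <stdint.h>

    /* metadata type in the upper 16 bits, feature bits in the lower 16 */
    #define META_FEATURE_TYPE(type, features) \
            ((uint32_t)(type) << 16 | ((uint32_t)(features) & 0xffff))
    #define META_FEATURE_TYPE_TYPE(ft)      ((uint32_t)(ft) >> 16)
    #define META_FEATURE_TYPE_FEATURES(ft)  ((uint32_t)(ft) & 0xffff)

    static bool meta_type_has_features(uint32_t ft, uint32_t features)
    {
            return (META_FEATURE_TYPE_FEATURES(ft) & features) == features;
    }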
On the server, when negotiating buffers and metadata, check the
resulting features after filtering; if they are not 0, place them as
0-sized extra feature metadata on the buffer.
Add some metadata features for the sync_timeline, one that specifies
that the RELEASE flag is supported. With this in place, a producer can
see if a consumer supports the UNSCHEDULED_RELEASE flag.
See #4885
Count the params as we add them to the param arrays and use that to
update the stream params instead of using hardcoded indexes and sizes.
This makes it easier to add params and it also revealed a miscounted
param.
Initialize the mix_hooks, port_map and latency earlier, before we call
pw_impl_port_set_mix() and update_info, which could potentially expect
these to be initialized.
Driver output streams will start the cycle with a _trigger() operation,
which will call the process function (if necessary) to dequeue/queue a
buffer before starting the graph cycle. At the end of the cycle, the
internal stream process function is called again to recycle any buffers,
but we should not try to dequeue a new buffer (if there was any in the
queue) and say that we have data.
Do this by keeping track of whether the internal process function was
called because of a trigger or because of the end of the cycle. At the
end of the cycle, we can call trigger_end() but we should not prepare a
new buffer on the output io.
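Schematically, with hypothetical field and helper names:

    #include <stdbool.h>

    struct stream { bool driving; /* true while _trigger() runs process */ };

    void recycle_buffers(struct stream *s);
    void prepare_output_io(struct stream *s);  /* dequeue buffer, report data */
    void trigger_end(struct stream *s);
    void start_graph_cycle(struct stream *s);

    static void internal_process(struct stream *s)
    {
            recycle_buffers(s);
            if (s->driving)
                    prepare_output_io(s);  /* start of cycle: new buffer ok */
            else
                    trigger_end(s);        /* end of cycle: no new buffer */
    }

    static void stream_trigger(struct stream *s)
    {
            s->driving = true;
            internal_process(s);           /* may dequeue/queue a buffer */
            s->driving = false;
            start_graph_cycle(s);
    }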
Use the timer queue for scheduling stream and object data timeouts.
This avoids allocating timerfds for these timeouts and the timer queue
can handle many timeouts more efficiently.
If we don't get a link on a stream, we might never send a create stream
reply. The client handles this fine by timing out after 30s and dropping
the stream, but the server holds on to the pw_stream forever (or until
the client quits).
Let's add a timer to clean up such streams on the server.
Fixes: https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/4901
Wireplumber loads the libcamera nodes into the pipewire server.
We need to remove the RestrictNamespaces option from the service file
to allow libcamera to load sandboxed IPA modules.
Add a port.exclusive flag and inherit the value from the node.exclusive
flag if not otherwise specified.
Make it so that exclusive ports can only be linked once. This is
important for explicit sync where there can be only one producer and one
consumer in order to signal the timeline objects correctly.
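A sketch of the inheritance rule, using the property keys from above (the
helper itself is illustrative, not the actual implementation):

    #include <stdbool.h>
    #include <pipewire/properties.h>
    #include <spa/utils/string.h>

    static bool port_is_exclusive(const struct pw_properties *port_props,
                                  const struct pw_properties *node_props)
    {
            const char *str;

            if ((str = pw_properties_get(port_props, "port.exclusive")) != NULL)
                    return spa_atob(str);
            /* not otherwise specified: inherit from the node */
            if ((str = pw_properties_get(node_props, "node.exclusive")) != NULL)
                    return spa_atob(str);
            return false;
    }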
pw_stream now handles the other (output) latency for us; it will keep
the param and report it. If we are not interested in the upstream latency,
we don't have to parse and store it and can just be concerned with the
latency we report on our input port (input latency).
Update the scheduling doc with some information about how async
scheduling works. Also add something about the latency.
Async links add 1 quantum of latency so take that into account when
aggregating latencies.
Also a source directly linked to an async node does not add latency
(we evaluate the tee before incrementing the cycle so that it effectively
is executed in the previous cycle and consumed immediately by async
nodes). We can do this because the driver source always provides data
before the async node, and never concurrently.
Add a listener to the link for the node driver change as well because
that can now influence the latency for async nodes.
do_node_unprepare runs in both the server and the client when a node is
stopped. On the server side, set the status to FINISHED and trigger any
targets. This ensures the node will not be scheduled in this cycle
anymore. We have to do this because we can't know if the node is still
alive or not.
When the client receives the stop message, it will unprepare and set the
status to INACTIVE. This ensures the driver will no longer trigger the
node. If the server didn't already trigger the targets, do it in the
remote node.
This avoids a race where both the client and the server set the status;
if the INACTIVE state is set by the server, it might stall processing of
the client.
Fixes #4840
The docs say that a requested size of 0 can be returned and it means
that there is no suggestion for the size.
Make this so by decoupling the requested size value from the triggering
of the process callback. If we have no rate_match and no quantum
(because the driver didn't set it) we still want to schedule with a 0
requested size.
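A sketch of the decoupling; the types and helpers here are stand-ins, not the
real stream internals:

    #include <stdint.h>
    #include <stddef.h>

    struct rate_match { uint32_t size; };

    struct stream {
            struct rate_match *rate_match;  /* may be NULL */
            uint32_t quantum;               /* 0 when the driver didn't set it */
            uint32_t requested;
    };

    void call_process(struct stream *s);    /* user process callback */

    static void schedule_process(struct stream *s)
    {
            uint32_t requested = 0;

            if (s->rate_match != NULL)
                    requested = s->rate_match->size;
            else if (s->quantum != 0)
                    requested = s->quantum;

            s->requested = requested;       /* 0 means: no suggestion */
            call_process(s);                /* always scheduled */
    }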
Previously the pointer was determined as follows:
mm->this.ptr = SPA_PTROFF(m->ptr, range.start, void);
however, when `pw_map_range` is calculated, `pw_map_range::start` is the offset
from the beginning of the first page, starting at `pw_map_range::offset`.
This works correctly if `memblock_map()` runs, because that will map the file
with the expected offset, so using `range.start` is correct.
However, when a mapping is reused (i.e. `memblock_find_mapping()` finds
something), then `range.start` is not necessarily correct. Consider the
following example:
* page size is 10
* one memblock with size 20 (2 pages)
* the application wants two mappings:
  * (offset=5, size=10)
  * (offset=15, size=5)
After the first request from the application, a `mapping` object is created
that covers the first two pages of the memblock: offset=0 and size=20. During
the second request, the calculated `pw_map_range` is as follows:
{ start = 5, offset = 10, size = 10 }
and the only previously created mapping is reused since (0 <= 5) and (10 <= 20). When
the pointer of the mapping is adjusted afterwards, it will be incorrect since `m->ptr`
points to byte 0 on page 0 (instead of byte 0 on page 1, which is what `range.start`
assumes). Therefore the two requested mappings will unexpectedly overlap.
Fix that by using `offset - m->offset` when adjusting the mapping's pointer. Also move
the `range` variable into a smaller scope because it only makes sense there. And add
a test that checks the previously incorrect case described above.
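In other words, the adjustment becomes:
mm->this.ptr = SPA_PTROFF(m->ptr, offset - m->offset, void);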
Fixes: 2caf81c97c ("mem: improve memory handling")
Fixes#4884