Let's make sure we own the memory in buffers, so that we can be
resilient to the PW link going away. This currently maintains the status
quo of copying data into the pipewirepool for sending to the remote end,
but moves the allocation of buffers so that ownership is maintained by
the sink in all cases.
There are some tricky corners, especially with bufferpool vs. buffers
param negotiation -- bufferpool parameters can be negotiated in
GStreamer before the link even comes up, so we try to adapt the buffers
param to use the negotiated value. For now, that is more brittle than
tying those two aspects together. We can revisit this if we can find a
way to tie pipeline state and link state more closely.
Co-authored-by: Arun Raghavan <arun@asymptotic.io>
A number of changes for correctness:
1) We expose the actual min and max values we support in the
allocation query (sketched below).
2) We don't support max_buffers as 0, since unlimited buffers is not an
option.
3) In ParamBuffers, we request max_buffers from the bufferpool config,
as we cannot dynamically allocate buffers.
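As a rough sketch of point 1 (not the actual gst-pipewire code; my_sink_propose_allocation, MIN_BUFFERS and MAX_BUFFERS are placeholders), a sink can answer the allocation query with the limits it can really honour:

    #include <gst/gst.h>
    #include <gst/base/gstbasesink.h>

    #define MIN_BUFFERS 2    /* placeholder for the sink's real minimum */
    #define MAX_BUFFERS 16   /* placeholder; never 0, "unlimited" is not supported */

    static gboolean
    my_sink_propose_allocation (GstBaseSink *sink, GstQuery *query)
    {
      GstCaps *caps;
      gboolean need_pool;

      gst_query_parse_allocation (query, &caps, &need_pool);
      if (caps == NULL)
        return FALSE;

      /* Expose the actual min/max we support instead of 0 (unlimited),
       * so the negotiated pool config matches what we can allocate. */
      gst_query_add_allocation_pool (query, NULL, 4096 /* placeholder size */,
          MIN_BUFFERS, MAX_BUFFERS);
      return TRUE;
    }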
We need to make sure the memory sizes are correctly initialised so the
meta makes sense, and we don't copy the meta from the input buffer as
that doesn't make sense given we have our own meta already.
Counter-intuitive as it seems, when we are driving the clock, we can't
also provide a clock from PipeWire to the pipeline -- we need the
pipeline to drive the graph.
So we make the mode control whether we provide a clock or not.
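As an illustration only (the driving flag and helper are hypothetical, not the element's actual code), the mode effectively toggles GST_ELEMENT_FLAG_PROVIDE_CLOCK:

    #include <gst/gst.h>

    /* Hypothetical helper: when we drive the PipeWire graph, the pipeline must
     * supply the clock, so stop advertising our own PipeWire-backed clock. */
    static void
    update_clock_provision (GstElement *element, gboolean driving)
    {
      if (driving)
        GST_OBJECT_FLAG_UNSET (element, GST_ELEMENT_FLAG_PROVIDE_CLOCK);
      else
        GST_OBJECT_FLAG_SET (element, GST_ELEMENT_FLAG_PROVIDE_CLOCK);
    }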
When using PW source, one might want to dynamically link PW source to
a different source. Setting possible_caps to NULL prevents the caps
intersect from returning a successful result on format change. Do not
set possible_caps to NULL, since we get it from the peer caps, which should
ideally stay the same for the duration of the pipeline run. That allows
re-linking PW source any number of times with a pipeline like below.
gst-launch-1.0 pipewiresrc autoconnect=false ! queue ! video/x-raw,format=YUY2 ! videoconvert ! xvimagesink
The above pipeline can be made to switch between a camera source and a
screen capture source like wf-recorder.
Note that this fix only improves the status quo and won't work if the
peer caps change due to a re-negotiation.
We don't actually need to calculate the GCD for each resampler rate
update. The GCD is only used to scale the in/out rates when using the
full resampler, which we can cache at setup time and reuse.
The interpolating resampler can work perfectly fine with a GCD of 1 and
so we can just assume that.
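A minimal sketch of the idea (field and function names are hypothetical, not the actual resampler code): compute the GCD once when the full resampler is set up and cache the reduced in/out rates, while the interpolating path just uses 1:

    #include <stdint.h>

    static uint32_t gcd (uint32_t a, uint32_t b)
    {
      while (b != 0) {
        uint32_t t = a % b;
        a = b;
        b = t;
      }
      return a;
    }

    struct resample_scale {          /* hypothetical: stands in for resampler state */
      uint32_t scale_in, scale_out;  /* in/out rates divided by their GCD */
    };

    static void setup_full (struct resample_scale *s, uint32_t in_rate, uint32_t out_rate)
    {
      uint32_t g = gcd (in_rate, out_rate);  /* computed once at setup ... */
      s->scale_in = in_rate / g;             /* ... cached and reused on every */
      s->scale_out = out_rate / g;           /* subsequent rate update */
    }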
spa_alsa_read is called from the source process function when we are a
follower and no buffer is ready yet.
Part of the rate correction was performed by the ALSA driver when it
woke up, but now the resampler has updated the requested size and we
need to requery it before we can start reading samples.
Otherwise, we end up with requested samples from before the rate update
and we might not give enough samples to the resampler. In that case, the
adapter will call us again and we will again try to produce a buffer
worth of the requested samples, which will xrun.
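Very roughly, the ordering the fix establishes looks like this (all names are hypothetical stand-ins for the ALSA plugin's real state and helpers):

    #include <stdint.h>

    struct follower_state {
      uint32_t requested;   /* samples the resampler currently wants */
    };

    /* stand-ins for the rate-correction, requery and read steps */
    static void apply_rate_correction (struct follower_state *st) { st->requested = 512; }
    static uint32_t query_requested_samples (struct follower_state *st) { return st->requested; }
    static int read_samples (struct follower_state *st, uint32_t n) { (void) st; return (int) n; }

    static int read_when_follower (struct follower_state *st)
    {
      uint32_t n;

      apply_rate_correction (st);        /* the resampler may change 'requested' here */
      n = query_requested_samples (st);  /* re-query AFTER the update, not before */
      return read_samples (st, n);       /* so the resampler gets enough samples */
    }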
We also need to close the SyncObj fd we got, just like we close any
DmaBuf or MemFd.
Make sure we get a compiler error when we add more items to the
data type enumeration later.
Fixes #4807
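A sketch of the general pattern (not necessarily the exact code): handle every spa_data_type case explicitly and omit the default, so that a build with -Werror=switch fails to compile when a new data type is added, and the fd-bearing types (MemFd, DmaBuf, SyncObj) all get their fd closed:

    #include <unistd.h>
    #include <spa/buffer/buffer.h>

    static void release_data (struct spa_data *d)
    {
      switch ((enum spa_data_type) d->type) {
      case SPA_DATA_MemFd:
      case SPA_DATA_DmaBuf:
      case SPA_DATA_SyncObj:
        if (d->fd >= 0)
          close ((int) d->fd);    /* close SyncObj fds just like DmaBuf/MemFd */
        break;
      case SPA_DATA_Invalid:
      case SPA_DATA_MemPtr:
      case SPA_DATA_MemId:
        break;                    /* nothing to release */
      }                           /* no default: new enum items must be handled */
    }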
A2DP v1.4 uses the rfa bits for adding 5.1 and 7.1 configurations.
Clear those bits properly when sending configuration, in case remote
device sets them.
(cherry picked from commit ae7a893ce9)
Significantly better CPU performance at the expense of canceller quality. Not
implemented for 0.x series, as there's a lot more to enable there (such
as routing modes), and I am hoping to drop support for those versions
before too long.
Ensure we have at least a `.` after `audioconvert.filter-graph`, so we
don't try to read past the end if it does not exist.
Cherry-picked from a328e0ae28, dropping
the param doc update as that doesn't exist.
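As a minimal sketch of the kind of check (hypothetical helper, not the actual parser code): require that a '.' actually follows the prefix before looking at the suffix:

    #include <stdbool.h>
    #include <string.h>

    #define PREFIX "audioconvert.filter-graph"

    /* Hypothetical: only treat 'key' as a sub-key of the filter-graph namespace
     * if a '.' follows the prefix, so we never read past the end of a key that
     * is exactly the prefix (or shorter). */
    static bool is_filter_graph_subkey (const char *key, const char **suffix)
    {
      size_t len = strlen (PREFIX);

      if (strncmp (key, PREFIX, len) != 0)
        return false;
      if (key[len] != '.')       /* also rejects the terminating '\0' */
        return false;

      *suffix = key + len + 1;
      return true;
    }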
Reset buffers when deactivating to avoid having old data in the
ringbuffers, which also adds latency when activated again.
Clear sink_ready and capture_ready when resetting buffers to avoid
calling process() before there is new data to process.
The capture and sink streams may start before the playback stream, so
process() may fail to dequeue a playback buffer. In that case, advance the read
pointers to avoid building up latency in the ringbuffers.
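Very roughly (hypothetical helper; the real module code checks what is actually available), advancing the read pointer when no playback buffer could be dequeued looks like this with the spa_ringbuffer API:

    #include <spa/utils/ringbuffer.h>

    /* Hypothetical: when process() cannot dequeue a playback buffer yet, still
     * consume one quantum from the ring so the read pointer keeps up and no
     * extra latency accumulates. A real implementation would first check the
     * amount of data available. */
    static void skip_one_quantum (struct spa_ringbuffer *ring, uint32_t quantum_bytes)
    {
      uint32_t index;

      spa_ringbuffer_get_read_index (ring, &index);
      spa_ringbuffer_read_update (ring, index + quantum_bytes);
    }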
This reverts commit 46dfa69f26.
We do actually need to release the locks for now. The reason is that
the PipeWire core will at various points do a blocking invoke into the
thread-loop (which is the data-loop when using non-rt/async processing)
to synchronize state. Because these functions are called with the
thread-loop lock held, from some other thread (such as GStreamer), this
deadlocks: the thread-loop is locked and can't run, while the caller
waits for it to complete.
See #4472
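For illustration only (the core call here is a stand-in, not a specific PipeWire function): any call that block-invokes into the thread-loop must be made with the thread-loop lock released, otherwise the loop can never run the invoked item:

    #include <pipewire/pipewire.h>

    /* Stand-in for any PipeWire call that does a blocking invoke into the
     * thread-loop to synchronize state. */
    static void do_core_call_that_blocks_on_loop (void) { /* hypothetical */ }

    static void
    call_from_gst_thread (struct pw_thread_loop *loop)
    {
      pw_thread_loop_lock (loop);
      /* ... manipulate stream state under the lock ... */

      pw_thread_loop_unlock (loop);          /* release before the blocking call */
      do_core_call_that_blocks_on_loop ();   /* the loop is free to run and finish it */
      pw_thread_loop_lock (loop);

      /* ... continue under the lock ... */
      pw_thread_loop_unlock (loop);
    }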
Make a new alsa.use-ucm option that sets api.alsa.use-ucm on the device
it creates (when set).
There is some documentation floating around (the Arch wiki) with this
property.
See #4755
Properties of type Id should have a type of the enum with the possible
values associated with them.
The other types that don't have a fixed enumeration but are usually
mapped to some constant/description with PropInfo should be Int.
Fixes !2399
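Sketching the distinction with the SPA pod builder (the MY_PROP_* ids and values are illustrative only): an Id property advertises the enum of its possible values, while a control that is only described via PropInfo is built as an Int:

    #include <spa/pod/builder.h>
    #include <spa/param/props.h>

    #define MY_PROP_MODE    (SPA_PROP_START_CUSTOM + 1)  /* hypothetical props */
    #define MY_PROP_PRESET  (SPA_PROP_START_CUSTOM + 2)

    static void add_prop_info (struct spa_pod_builder *b)
    {
      /* Id property: enumerate the valid ids (first value is the default) */
      spa_pod_builder_add_object (b,
          SPA_TYPE_OBJECT_PropInfo, SPA_PARAM_PropInfo,
          SPA_PROP_INFO_id,   SPA_POD_Id (MY_PROP_MODE),
          SPA_PROP_INFO_type, SPA_POD_CHOICE_ENUM_Id (3, 0, 0, 1));

      /* Int property: no fixed enumeration, values are mapped to
       * constants/descriptions elsewhere */
      spa_pod_builder_add_object (b,
          SPA_TYPE_OBJECT_PropInfo, SPA_PARAM_PropInfo,
          SPA_PROP_INFO_id,   SPA_POD_Id (MY_PROP_PRESET),
          SPA_PROP_INFO_type, SPA_POD_CHOICE_RANGE_Int (0, 0, 7));
    }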
When a linked node needs to be resynced we actually never clear the
flag or reset the dll. Move the code around so that it still does
the reset of the flag and dll without actually doing the resync in
the ringbuffer when it is a linked node.
When we do_prepare, always reset the dll. We already set the alsa_sync
field but that is only used by followers to resync in some cases.
When resetting the dll, we also reset the next_time and base_time values.
However, we need to do this before calculating the error in update_time
when we are the driver in IRQ mode, or else we get a crazy error
that distorts the rate estimation.
Because we do the processing of the graph in the playback process
function, only do graph reset and reconfigure from the playback state
change, so that processing and the state change don't happen at the same
time and crash.