Previously, when a sample was "committed" from an upload stream,
its reference count was set to 1. This is problematic if the
sample is committed a second time while streams are still playing
it, as the reference count will go out of sync.
The problem can be easily triggered, especially with longer samples:
pactl upload-sample a-long-sample.ogg
pactl play-sample a-long-sample
pactl play-sample a-long-sample
pactl upload-sample a-long-sample.ogg # while playing
When the first stream finishes playing, it will free the sample,
which can cause problems e.g. for the second stream playing the
sample at that very moment.
Fix that by decoupling the buffer from the sample and making
the buffer reference counted. Also remove the reference counting
from the samples, as it is no longer needed.
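The idea, as a minimal sketch with hypothetical names (the real
identifiers in the code may differ): each playing stream holds its
own reference on the buffer, and committing a sample again simply
swaps in a new buffer while the old one lives on until its last
reference is dropped.

    #include <stdint.h>
    #include <stdlib.h>

    struct sample_buf {
        int ref;            /* reference count, 1 on creation */
        uint8_t *data;      /* decoded sample data */
        size_t length;
    };

    static struct sample_buf *sample_buf_ref(struct sample_buf *buf)
    {
        buf->ref++;
        return buf;
    }

    static void sample_buf_unref(struct sample_buf *buf)
    {
        if (--buf->ref == 0) {
            free(buf->data);
            free(buf);
        }
    }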
Furthermore, the reference counts were previously ignored when
the removal of a sample was requested. That is fixed as well.
This issue can also be triggered easily:
pactl upload-sample a-long-sample.ogg
pactl play-sample a-long-sample
pactl remove-sample a-long-sample # while playing
Fixes #1953
Create a new event for modules ('destroy') which is emitted from
`module_free()`. It is used by the module loading logic to handle
the case where a module is destroyed before it has properly
finished loading.
Store the modules whose load has been initiated by a particular
client in the `pending_modules` list of the client. When the
client disconnects, "detach" the client from the pending module
objects. This way the reference count need not be increased
for asynchronous module loads.
Furthermore, if the module can load synchronously, do not create
the pending module object at all.
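A rough sketch of the shape of this, with illustrative structure
and field names (not the exact ones in the code):

    #include <spa/utils/list.h>

    struct module;                  /* opaque here */

    struct client {
        struct spa_list pending_modules;
    };

    struct pending_module {
        struct spa_list link;       /* in client->pending_modules */
        struct client *client;      /* NULL after the client is gone */
        struct module *module;
    };

    static void client_detach_pending(struct client *client)
    {
        struct pending_module *pm;
        /* detach instead of taking a module reference; the module
         * 'destroy' event cleans the entry up later */
        spa_list_consume(pm, &client->pending_modules, link) {
            spa_list_remove(&pm->link);
            pm->client = NULL;
        }
    }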
Only call `spa_list_remove()` in `stream_free()` if the
stream is pending. `spa_list_remove()` does not reinitialize
the list node, therefore calling `spa_list_remove()` again
after the stream has been removed from the pending list
will corrupt the pending list of the client.
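The shape of the fix (sketch; the field names are illustrative):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <spa/utils/list.h>

    struct stream {
        struct spa_list link;       /* in client->pending_streams */
        bool pending;               /* still on the pending list */
    };

    static void stream_free(struct stream *stream)
    {
        /* spa_list_remove() does not reinitialize the node, so only
         * unlink while the stream is actually on the pending list */
        if (stream->pending) {
            spa_list_remove(&stream->link);
            stream->pending = false;
        }
        free(stream);
    }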
When we can't fill a complete block, report the amount of data that we
used in missing/played instead of the complete missing part.
Fixes audio breaking up when looping in mpv.
Fixes #1132
After we get a reposition request, bring the state to the SYNC state
again so that clients can align with the new position.
Fixes a problem with reposition when using the JACK transport.
Fixes #1907
Always reevaluate the tlength or total buffered samples when the
quantum changes, even the first time, because it is possible that
we completely miscalculated the length when we started, like when
the quantum is forced high and the requested latency is low.
Also, only increase the calculated tlength; for smaller sizes we
don't need to do anything and can keep the latency as it is.
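Conceptually (a sketch; the names and the exact formula are
illustrative, not the real calculation):

    #include <stdint.h>

    struct stream {
        uint32_t frame_size;
        struct { uint32_t tlength; } attr;
    };

    /* recompute the target length for the new quantum */
    static uint32_t update_tlength(struct stream *s, uint32_t quantum)
    {
        uint32_t tlength = quantum * s->frame_size * 2;

        if (tlength > s->attr.tlength)
            s->attr.tlength = tlength;  /* grow only */
        return s->attr.tlength;
    }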
See #1930
When we are draining or underrunning, read whatever we have in the
ringbuffer instead of silence. This places the last samples before
the drain into the sink, padded with 0.
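Sketched with the SPA ringbuffer API (the ring and size names are
illustrative):

    #include <string.h>
    #include <spa/utils/ringbuffer.h>

    #define RINGSIZE (1u << 16)     /* illustrative ring size */

    /* read what is left in the ring and pad the rest with silence */
    static void read_block(struct spa_ringbuffer *ring, uint8_t *buffer,
            uint8_t *dst, uint32_t size)
    {
        uint32_t index, to_read;
        int32_t avail;

        avail = spa_ringbuffer_get_read_index(ring, &index);
        to_read = SPA_MIN((uint32_t)SPA_MAX(avail, 0), size);

        spa_ringbuffer_read_data(ring, buffer, RINGSIZE,
                index % RINGSIZE, dst, to_read);
        spa_ringbuffer_read_update(ring, index + to_read);

        memset(dst + to_read, 0, size - to_read);
    }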
Fixes #1549
This reverts commit c14e89a578.
This makes it impossible for Flatpak apps to remove links. Maybe:
- Flatpak apps are not allowed to make lingering links.
- Flatpak apps can only delete their own links.
- Some Flatpak apps need to be tagged as managers in order to
  create lingering links and destroy any link.
Fixes #1920
When we have a fix_* flag set, make an extra format description with the
wildcards. This makes it possible for the session manager to fall back
to something when selecting a target and format.
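For instance, the wildcard entry could keep the fixed fields and
open up the rest as ranges; a sketch with the SPA pod builder (the
concrete values are illustrative):

    #include <spa/param/audio/format-utils.h>

    static const struct spa_pod *wildcard_format(struct spa_pod_builder *b)
    {
        return spa_pod_builder_add_object(b,
            SPA_TYPE_OBJECT_Format,    SPA_PARAM_EnumFormat,
            SPA_FORMAT_mediaType,      SPA_POD_Id(SPA_MEDIA_TYPE_audio),
            SPA_FORMAT_mediaSubtype,   SPA_POD_Id(SPA_MEDIA_SUBTYPE_raw),
            SPA_FORMAT_AUDIO_format,   SPA_POD_Id(SPA_AUDIO_FORMAT_F32),
            SPA_FORMAT_AUDIO_rate,     SPA_POD_CHOICE_RANGE_Int(48000, 1, 384000),
            SPA_FORMAT_AUDIO_channels, SPA_POD_CHOICE_RANGE_Int(2, 1, 64));
    }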
Also, only advertise the valid PulseAudio formats for the wildcards.
Fixes #1912
Only update the quantum/rate when we have a pending change.
This works around a bug in sco-source that changes the quantum
by itself, but in any case, this optimization is nice to have.
See #1905
When moving from one driver to another, move the quantum and rate
to the `current_` fields so that they are applied when the next
cycle starts instead of during the cycle.
The `pw_*_info` structures in core pipewire all have 64-bit change
masks. Convert the change masks in the session manager extension
to 64-bit as the differing sizes can cause problems.
This introduces an API and ABI break unfortunately, but due to
the limited number of users of the session manager extension,
it was deemed safe.
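Illustratively (the layout is simplified here; only the width of
the mask is the point):

    #include <stdint.h>

    struct spa_dict;

    struct pw_session_info {
        uint32_t version;
        uint32_t id;
    #define PW_SESSION_CHANGE_MASK_PROPS (1ULL << 0)
        uint64_t change_mask;       /* previously 32 bits wide */
        struct spa_dict *props;
    };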
See wireplumber#49
Keep track of the current quantum and recalculate the tlength in the
same way that PulseAudio does.
Send a bufferattr changed message to a client when we change the
parameters.
This fixes the case where the quantum is increased and there needs to be
more buffering to keep the stream going.
Because we keep everything in a ringbuffer and provide exactly the
required amount of data, we can use 1/4 buffers.
Also increase the buffer size. We don't want to limit the buffer size
to the negotiated tlength because it can be increased later. Instead
scale it to the max quantum size (8192) with a max resample rate of 32.
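The sizing, roughly (constants as described above, names
illustrative):

    #include <stdint.h>

    #define MAX_QUANTUM  8192u  /* largest quantum we expect */
    #define MAX_RESAMPLE 32u    /* maximum resample rate factor */

    /* size the ring for the worst case instead of the negotiated
     * tlength, which may still grow later */
    static uint32_t ring_size(uint32_t frame_size)
    {
        return MAX_QUANTUM * MAX_RESAMPLE * frame_size;
    }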
This reverts commit 1b94b66924.
It causes problems with QEMU.
Without this patch, paplay --latency-msec=1 /some.wav hangs when
forcing the quantum to 8192. A different fix will be needed.
Use the new TRIGGER flag on the stream to ensure that the source and
playback streams only get scheduled after we process their input
streams, the sink and capture.
The trigger flag adds an extra dependency on the node so that it
does not automatically get scheduled. Manual scheduling is
required with, for example, `pw_stream_trigger_process()`.
This can be used to create an artificial dependency between a sink
stream and a source stream, like when using loopback or filter-chain.
Normally those streams are not linked in the graph but they have an
internal dependency. Without any such dependency, the source part of the
chain will be scheduled first and then the sink part and we get a
cycle of delay (with possible quantum changes etc.).
With this patch, the sink part will be scheduled first and its process
function will trigger the 'downstream' source stream explicitly. The
sink and source stream will stay in sync and will use the same quantum.
This reduces the latency and glitches because of quantum changes.
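In stream terms, this looks roughly like the following (sketch;
the loopback-style naming is illustrative):

    #include <pipewire/pipewire.h>

    struct impl {
        struct pw_stream *sink;     /* capture/input side */
        struct pw_stream *source;   /* playback/output side, TRIGGER */
    };

    /* the sink's process function explicitly schedules the source */
    static void sink_process(void *data)
    {
        struct impl *impl = data;
        /* ... move data from the sink stream to the source ... */
        pw_stream_trigger_process(impl->source);
    }

    static int connect_source(struct impl *impl,
            const struct spa_pod **params, uint32_t n_params)
    {
        /* TRIGGER: not scheduled automatically, only when triggered */
        return pw_stream_connect(impl->source, PW_DIRECTION_OUTPUT,
                PW_ID_ANY,
                PW_STREAM_FLAG_MAP_BUFFERS |
                PW_STREAM_FLAG_TRIGGER,
                params, n_params);
    }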
Fixes #1873
Don't directly update the quantum and rate in the driver position
when recalculating the graph or else clients might see different values
during one cycle.
Instead, update another variable and copy it into the position
when we start a new cycle.
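A sketch of the idea (the field names are hypothetical):

    #include <spa/node/io.h>

    struct driver {
        struct spa_io_position *position;
        uint64_t current_quantum;   /* staged values */
        uint32_t current_rate;
    };

    /* recalculating the graph only stages the new values */
    static void stage(struct driver *d, uint64_t quantum, uint32_t rate)
    {
        d->current_quantum = quantum;
        d->current_rate = rate;
    }

    /* the staged values become visible at the start of the next
     * cycle, so clients see consistent values for a whole cycle */
    static void cycle_start(struct driver *d)
    {
        d->position->clock.duration = d->current_quantum;
        d->position->clock.rate = SPA_FRACTION(1, d->current_rate);
    }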