In the case of encoded video, we get n_planes as 0 from the video info, so passing that as n_datas fails during buffer negotiation. Make sure to use an appropriate value based on whether we have raw video or not.
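A minimal sketch of the idea; the helper name and the is_raw flag are illustrative, not the actual element code:

#include <gst/video/video.h>

/* Hypothetical helper: derive n_datas from the media type instead of
 * passing n_planes through directly, which is 0 for encoded video. */
static guint
get_n_datas (const GstVideoInfo *info, gboolean is_raw)
{
  /* raw video: one data per plane; encoded video: a single data block */
  return is_raw ? GST_VIDEO_INFO_N_PLANES (info) : 1;
}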
Co-authored-by: Taruntej Kanakamalla <taruntej@asymptotic.io>
When the PW source is used with something like a camera and the camera is disconnected, all buffers are removed and the stream is paused.
When a PW sink is used together with the source, the sink-side pipeline can go to EOS. This again results in all the buffers being removed and the stream being paused on the source side. The PW source-side pipeline can also crash if the sink was in the middle of frame-copying a buffer for rendering when that buffer got removed.
Handle this scenario by sending a flush-start event at the start of buffer removal and flush-stop at the end, followed by an end-of-stream event or a pipeline error depending on the user's selection.
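A sketch of the sequence, with the srcpad and the post_error flag standing in for the element's actual state:

#include <gst/gst.h>

static void
handle_buffers_removed (GstElement *element, GstPad *srcpad,
    gboolean post_error)
{
  gst_pad_push_event (srcpad, gst_event_new_flush_start ());
  /* ... all buffers are removed here ... */
  gst_pad_push_event (srcpad, gst_event_new_flush_stop (FALSE));

  if (post_error)
    GST_ELEMENT_ERROR (element, RESOURCE, NOT_FOUND,
        ("PipeWire stream was removed"), (NULL));
  else
    gst_pad_push_event (srcpad, gst_event_new_eos ());
}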
For a pipeline like below, we might want to dynamically switch the audio
source.
gst-launch-1.0 -e pipewiresrc autoconnect=false ! queue ! audioconvert ! autoaudiosink
On switching to a different audio source, any one of the driver, quantum or clock rate might change, which changes the `result` value returned by gst_pipewire_clock_get_internal_time.
This can result in the basesrc create function incorrectly waiting in gst_clock_id_wait. We post a clock-lost message to fix this. In the case of gst-launch, it will set the pipeline to PAUSED and then PLAYING to force a new clock and a new base_time distribution.
Without the clock lost message, the following can be seen
before re-linking to a different source
0:00:30.887602864 79499 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:00:17.024565416 now 0:00:17.024109144 diff (time-now) 456272
after re-linking to a different source
0:00:45.790843245 79499 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:00:31.927694059 now 0:00:17.066883864 diff (time-now) 14860810195
With the clock lost message, the following can be seen
before re-linking to a different source
0:01:09.336533552 89461 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:00:58.198536772 now 0:00:58.197444926 diff (time-now) 1091846
after re-linking to a different source
0:01:21.659827958 89461 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:28:24.853517646 now 0:28:24.853527204 diff (time-now) -9558
Note the difference in the `time` and `now` fields of the above log messages.
This is easy to reproduce by using a pipewiresink-backed stream, created with a pipeline like the one below, as one of the sources during switching.
gst-launch-1.0 -e audiotestsrc wave=ticks ! audioconvert ! audio/x-raw,format=F32LE,rate=48000,channels=1 !
pipewiresink stream-properties="props,media.class=Audio/Source,node.description=pwsink" client-name=pwsink
Applications need to handle the GST_MESSAGE_CLOCK_LOST message in their
bus handlers.
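For reference, a minimal bus handler doing just that, using the standard PAUSED/PLAYING cycle to force clock re-selection:

#include <gst/gst.h>

static gboolean
bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *pipeline = user_data;

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_CLOCK_LOST) {
    /* force selection of a new clock and a new base_time */
    gst_element_set_state (pipeline, GST_STATE_PAUSED);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
  }
  return TRUE;
}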
Let's make sure we own the memory in buffers, so that we can be
resilient to the PW link going away. This currently maintains the status
quo of copying data into the pipewirepool for sending to the remote end,
but moves the allocation of buffers so that ownership is maintained by
the sink in all cases.
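A sketch of that status quo, under assumed names rather than the literal sink code: incoming data is deep-copied into a buffer acquired from the sink's own pool, so it stays valid even if the link goes away.

#include <gst/gst.h>

static GstBuffer *
copy_to_owned_buffer (GstBufferPool *pool, GstBuffer *inbuf)
{
  GstBuffer *owned = NULL;

  if (gst_buffer_pool_acquire_buffer (pool, &owned, NULL) != GST_FLOW_OK)
    return NULL;
  /* deep-copy the memory so we keep no references to inbuf's data */
  gst_buffer_copy_into (owned, inbuf,
      GST_BUFFER_COPY_DEEP | GST_BUFFER_COPY_MEMORY, 0, -1);
  return owned;
}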
There are some tricky corners, especially with bufferpool vs. buffers
param negotiation -- bufferpool parameters can be negotiated in
GStreamer before the link even comes up, so we try to adapt the buffers
param to use the negotiated value. For now, that is more brittle than
tying those two aspects together. We can revisit this if we can find a
way to tie pipeline state and link state more closely.
Co-authored-by: Arun Raghavan <arun@asymptotic.io>
A number of changes for correctness:
1) We expose the actual min and max values we support in the
allocation query.
2) We don't support a max_buffers of 0, as unlimited buffers are not an
option.
3) In ParamBuffers, we request max_buffers from the bufferpool config
(as sketched below), since we cannot dynamically allocate buffers.
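A sketch of point 3 with assumed variable names; the exact pod layout in the element may include more fields:

#include <spa/param/buffers.h>
#include <spa/pod/builder.h>

static const struct spa_pod *
build_buffers_param (struct spa_pod_builder *b, uint32_t max_buffers,
    uint32_t size)
{
  /* request exactly the bufferpool's max_buffers, since buffers cannot
   * be allocated dynamically later */
  return spa_pod_builder_add_object (b,
      SPA_TYPE_OBJECT_ParamBuffers, SPA_PARAM_Buffers,
      SPA_PARAM_BUFFERS_buffers, SPA_POD_Int (max_buffers),
      SPA_PARAM_BUFFERS_size,    SPA_POD_Int (size));
}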
We need to make sure the memory sizes are correctly initialised so the meta makes sense, and we don't copy the meta from the input buffer, as we already have our own meta.
Counter-intuitive as it seems, when we are driving the clock, we can't
also provide a clock from PipeWire to the pipeline -- we need the
pipeline to drive the graph.
So we make the mode control whether we provide a clock or not.
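In GStreamer terms this could look like the following sketch, with the driving flag standing in for the actual mode check:

#include <gst/gst.h>

static void
update_clock_provision (GstElement *element, gboolean driving)
{
  /* when the pipeline drives the graph, don't advertise a PW clock */
  if (driving)
    GST_OBJECT_FLAG_UNSET (element, GST_ELEMENT_FLAG_PROVIDE_CLOCK);
  else
    GST_OBJECT_FLAG_SET (element, GST_ELEMENT_FLAG_PROVIDE_CLOCK);
}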
When using the PW source, one might want to dynamically link it to a different source. Setting possible_caps to NULL prevents the caps intersection from succeeding on a format change. Do not set possible_caps to NULL: we get it from the peer caps, which should ideally stay the same for the duration of the pipeline run. This allows re-linking the PW source any number of times with a pipeline like the one below.
gst-launch-1.0 pipewiresrc autoconnect=false ! queue ! video/x-raw,format=YUY2 ! videoconvert ! xvimagesink
The above pipeline can be made to switch between a camera source and a
screen capture source like wf-recorder.
Note that this fix only improves the status quo and won't work if the
peer caps change due to a re-negotiation.
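The check that keeps working across re-links is essentially an intersection against the retained peer caps; a sketch with assumed names:

#include <gst/gst.h>

static gboolean
format_is_acceptable (GstCaps *possible_caps, GstCaps *new_caps)
{
  GstCaps *res = gst_caps_intersect (possible_caps, new_caps);
  gboolean ok = !gst_caps_is_empty (res);

  gst_caps_unref (res);
  return ok;
}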
We also need to close the SyncObj fd we got, just like we close any DmaBuf or MemFd.
Make sure we get a compiler error when we add more items to the
data type enumeration later.
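One common way to get that compiler error (a sketch, not necessarily the exact code): switch over the enum without a default, so -Wswitch (with -Werror) flags any later addition that isn't handled.

#include <unistd.h>
#include <spa/buffer/buffer.h>

static void
close_data_fd (enum spa_data_type type, int fd)
{
  switch (type) {
  case SPA_DATA_MemFd:
  case SPA_DATA_DmaBuf:
  case SPA_DATA_SyncObj:
    close (fd);
    break;
  case SPA_DATA_Invalid:
  case SPA_DATA_MemPtr:
  case SPA_DATA_MemId:
    break;
  /* no default: adding a new spa_data_type must not compile until
   * it is handled here */
  }
}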
Fixes #4807
Reset buffers when deactivating to avoid having old data in the
ringbuffers, which also adds latency when activated again.
Clear sink_ready and capture_ready when resetting buffers to avoid
calling process() before there is new data to process.
Capture and sink streams may start before the playback stream, so process() may fail to dequeue a playback buffer. In that case, advance the read pointers to avoid building up latency in the ringbuffers.
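A sketch of the read-pointer advance, with assumed names:

#include <spa/utils/ringbuffer.h>

static void
skip_input_data (struct spa_ringbuffer *ring, uint32_t size)
{
  uint32_t index;

  /* consume `size` bytes without reading them, so stale samples don't
   * accumulate as latency while playback has no buffers yet */
  spa_ringbuffer_get_read_index (ring, &index);
  spa_ringbuffer_read_update (ring, index + size);
}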
Because we do the processing of the graph in the playback process function, only do the graph reset and reconfigure from the playback state change, so that processing and a state change can't happen at the same time and crash.
When the stream was paused, the internal delay buffers were cleared, but some data could stay in the stream's output queue. Without a flush, this data was played ahead of the new data.
The patch was inspired by 64d6ff4184, which fixes the same issue in the filter-chain module.
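A sketch of the missing flush; pw_stream_flush() with drain set to false discards whatever is still queued:

#include <pipewire/pipewire.h>

static void
drop_queued_output (struct pw_stream *stream)
{
  /* drain = false: drop queued data instead of playing it out */
  pw_stream_flush (stream, false);
}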
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
The combine stream selects the biggest latency from all output streams and sends that latency upstream. To select the biggest latency, each stream needs to have the sample rate and the quantum size set.
The combine stream recalculates the latency in the latency-changed callback or during data processing.
A stream sets the sample rate and the quantum size in a copy_position call, which is normally made while processing the output data or when the state changes to streaming.
Before this change, there was no guarantee that copy_position had already been called for each stream, so the latency in the combine stream was selected from a random stream.
Signed-off-by: Martin Geier <martin.geier@streamunlimited.com>
When _probe() is called, take a ref to the newly created devices instead of sinking the floating ref, since gst_clear_object() is called when the core is disconnected. Otherwise the devices would be freed before the caller gets them.
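A sketch of the intended ownership, with assumed names; the device arrives with its floating ref intact:

#include <gst/gst.h>

static void
remember_device (GList **own_list, GList **probed, GstDevice *device)
{
  /* our strong ref, released with gst_clear_object() on disconnect */
  *own_list = g_list_prepend (*own_list, g_object_ref (device));
  /* the caller sinks the original floating ref */
  *probed = g_list_prepend (*probed, device);
}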
Fixes the following assert in the caller:
g_object_is_floating: assertion 'G_IS_OBJECT (object)' failed
Or sometimes a segfault with the backtrace:
0 g_type_check_instance_is_fundamentally_a (type_instance=type_instance@entry=0x116c1b0, fundamental_type=fundamental_type@entry=80) at /usr/src/debug/glib-2.0/2.84.0/gobject/gtype.c:3918
1 0xb6d40cc6 in g_object_is_floating (_object=0x116c1b0) at /usr/src/debug/glib-2.0/2.84.0/gobject/gobject.c:3843
2 0xb6bc4c74 in gst_device_provider_get_devices (provider=0x109ba00) at /usr/src/debug/gstreamer1.0/1.24.12/gst/gstdeviceprovider.c:426
The change from 1 to 8 was done without justification in the commit message, possibly for debug purposes. Unfortunately it breaks negotiation with the libcamera virtual pipeline, which defaults to 4 buffers.
Set the value to 1 again, as a successful negotiation - even with an unusually low number of buffers - is usually more desirable than an error.
Fixes: 98b7dc7c0 ("gst: don't do set_caps from the pipewire callback")
(cherry picked from commit e81fb77322)
Don't set the UMP type flag on the format. Use the negotiated types flag to decide what format to output. Add support for outputting old-style midi.
Set the UMP type flag only on the new mixer and JACK when UMP is
enabled.
This ensures that only new (or explicitly requesting) apps get UMP and
old apps receive old midi.
This makes JACK running on 1.2 in flatpaks work with midi again.
There is no need to encode the potential format in the format.dsp of control ports; this is just for legacy compatibility with JACK apps. The actual format can be negotiated with the types field.
Fixes midi port visibility with apps compiled against 1.2, such as JACK
apps in flatpaks.
Only react to capture stream state changes and format changes. The playback and capture streams change state somewhat concurrently, so the graph state might not be consistent.
Because only the capture stream will trigger the playback stream and
start the processing, we can safely react to capture stream changes
only.
The params contain the send/recv streams from the point of view of the
manager (and not the driver as was assumed before). This means we need
to swap send/recv in the driver, not the manager.
This makes things interoperate with JACK/netjack2.
See #4666
Destroy the sources from the io handler immediately when there is an
error so that we don't end up in endless error wakeups.
Schedule the free from the main loop and make sure only one can ever
run.
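A sketch of the shape of the fix, under assumed names:

#include <pipewire/pipewire.h>

struct impl {
  struct pw_loop *data_loop;
  struct pw_loop *main_loop;
  struct spa_source *source;
  bool free_scheduled;
};

static int
do_free (struct spa_loop *loop, bool async, uint32_t seq,
    const void *data, size_t size, void *user_data)
{
  /* the actual teardown of `impl` happens here, on the main loop */
  return 0;
}

static void
on_io_error (struct impl *impl)
{
  /* destroy the source right away so its fd can't wake us up again */
  pw_loop_destroy_source (impl->data_loop, impl->source);
  impl->source = NULL;

  /* schedule the free from the main loop, and only once */
  if (!impl->free_scheduled) {
    impl->free_scheduled = true;
    pw_loop_invoke (impl->main_loop, do_free, 0, NULL, 0, false, impl);
  }
}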
The manager is actually not supposed to decide much about the number of audio and midi ports. It should just suggest a default when the connecting driver doesn't know.
Add an audio.ports parameter to the manager and driver to suggest/ask for the number of audio ports. Let audio.position/audio.channels specify the channel mask when it matches the requested number of channels; otherwise use AUX channels for the ports.
This means that we must derive the mode (sink/source/audio/midi) from
the ports that are negotiated in the manager and the driver, so delay
this until after negotiation.
Make sure all the possible modes work. For midi-only streams, we can't wait for the session manager to perform a PortConfig, so do that ourselves. Make sure we only use a source trigger when we have a sink.
Fixes #4666