In the case of encoded video, we get n_planes as 0 from the video info, so
passing that as n_datas fails during buffer negotiation. Make sure to use
an appropriate value based on whether we have raw video or not.
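A minimal sketch of the intended selection (the raw-video check and the variable names are illustrative, not the actual code):
    /* Hypothetical sketch: take n_datas from the video info only for raw video. */
    uint32_t n_datas;
    if (is_video_raw)                                    /* e.g. caps are video/x-raw */
      n_datas = GST_VIDEO_INFO_N_PLANES (&video_info);   /* one data block per plane */
    else
      n_datas = 1;                                       /* encoded video: n_planes is 0 */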
Co-authored-by: Taruntej Kanakamalla <taruntej@asymptotic.io>
Add support for FairPlay SAP v2.5 (encryption type 5) devices such as Apple HomePod minis.
Apparently only these devices require the `POST /feedback` heartbeat, so fix that.
When the PW source is used with something like a camera and the camera is
disconnected, all buffers are removed and the stream is paused.
When a PW sink is used together with the source, the sink-side pipeline can
go to EOS. This again results in all the buffers being removed and the stream
being paused on the source side. The PW source-side pipeline can also crash
if the sink was in the middle of frame-copying a buffer for rendering when
that buffer got removed.
Handle this scenario by sending a flush-start event at the start of buffer
removal and a flush-stop at the end, followed by an end-of-stream event or a
pipeline error depending on user selection.
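A hedged sketch of that event sequence on the source pad (the user-selection flag and error text are illustrative):
    /* Hypothetical sketch of the handling around buffer removal. */
    gst_pad_push_event (srcpad, gst_event_new_flush_start ());
    /* ... buffers are removed here ... */
    gst_pad_push_event (srcpad, gst_event_new_flush_stop (FALSE));
    if (send_eos_on_removal)   /* user-selected behaviour */
      gst_pad_push_event (srcpad, gst_event_new_eos ());
    else
      GST_ELEMENT_ERROR (self, RESOURCE, FAILED,
          ("All buffers were removed"), (NULL));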
Initialize the byte array with bytes instead of a string because the 0
byte at the end of the string does not fit in the array and causes a
compiler warning.
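An illustrative example of the problem and the fix (the array name and contents are hypothetical):
    /* Before (warns): the implicit trailing '\0' of the string literal does
     * not fit in the 4-byte array:
     *   static const char tag[4] = "RIFF";
     * After: initialize with bytes so no terminator is implied. */
    static const char tag[4] = { 'R', 'I', 'F', 'F' };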
For a pipeline like below, we might want to dynamically switch the audio
source.
gst-launch-1.0 -e pipewiresrc autoconnect=false ! queue ! audioconvert ! autoaudiosink
On switching to a different audio source, any one of the driver, quantum
or clock rate might change, which changes the `result` value returned by
gst_pipewire_clock_get_internal_time.
This can result in the basesrc create function incorrectly waiting in
gst_clock_id_wait. We post a clock-lost message to fix this. In the case
of gst-launch, it will set the pipeline to PAUSED and then PLAYING to
force a new clock and a new base_time distribution.
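A hedged sketch of posting that message from the source (the exact call site and the clock field are assumptions):
    /* Hypothetical sketch: tell the application the selected clock is no
     * longer usable so a new clock and base_time can be distributed. */
    gst_element_post_message (GST_ELEMENT (src),
        gst_message_new_clock_lost (GST_OBJECT (src), GST_CLOCK (src->clock)));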
Without the clock lost message, the following can be seen
before re-linking to a different source
0:00:30.887602864 79499 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:00:17.024565416 now 0:00:17.024109144 diff (time-now) 456272
after re-linking to a different source
0:00:45.790843245 79499 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:00:31.927694059 now 0:00:17.066883864 diff (time-now) 14860810195
With the clock lost message, the following can be seen
before re-linking to a different source
0:01:09.336533552 89461 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:00:58.198536772 now 0:00:58.197444926 diff (time-now) 1091846
after re-linking to a different source
0:01:21.659827958 89461 0x7fffe8000d40 DEBUG GST_CLOCK gstsystemclock.c:1158:gst_system_clock_id_wait_jitter_unlocked:<pipewireclock0> entry 0x7fffd803fad0 time 0:28:24.853517646 now 0:28:24.853527204 diff (time-now) -9558
Note the difference in `time` and `now` fields of the above log message.
This is easy to reproduce by using a pipewiresink set up as an audio source,
with a pipeline like below, as one of the sources during switching.
gst-launch-1.0 -e audiotestsrc wave=ticks ! audioconvert ! audio/x-raw,format=F32LE,rate=48000,channels=1 !
pipewiresink stream-properties="props,media.class=Audio/Source,node.description=pwsink" client-name=pwsink
Applications need to handle the GST_MESSAGE_CLOCK_LOST message in their
bus handlers.
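A minimal example of the usual handling in an application's bus callback (pipeline and callback names are illustrative):
    /* Illustrative bus watch: on CLOCK_LOST, bounce the pipeline through
     * PAUSED so a new clock and base_time are distributed. */
    static gboolean
    bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
    {
      GstElement *pipeline = user_data;
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_CLOCK_LOST) {
        gst_element_set_state (pipeline, GST_STATE_PAUSED);
        gst_element_set_state (pipeline, GST_STATE_PLAYING);
      }
      return TRUE;
    }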
Let's make sure we own the memory in buffers, so that we can be
resilient to the PW link going away. This currently maintains the status
quo of copying data into the pipewirepool for sending to the remote end,
but moves the allocation of buffers so that ownership is maintained by
the sink in all cases.
There are some tricky corners, especially with bufferpool vs. buffers
param negotiation -- bufferpool parameters can be negotiated in
GStreamer before the link even comes up, so we try to adapt the buffers
param to use the negotiated value. For now, that is more brittle than
tying those two aspects together. We can revisit this if we can find a
way to tie pipeline state and link state more closely.
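A hedged sketch of the copy into a sink-owned buffer (pool and variable names are illustrative):
    /* Hypothetical sketch: acquire a buffer the sink's pipewirepool owns and
     * copy the incoming data into it, so the memory stays valid even if the
     * PW link goes away. */
    GstBuffer *pooled = NULL;
    if (gst_buffer_pool_acquire_buffer (pool, &pooled, NULL) == GST_FLOW_OK) {
      GstMapInfo map;
      if (gst_buffer_map (pooled, &map, GST_MAP_WRITE)) {
        gsize filled = gst_buffer_extract (inbuf, 0, map.data, map.size);
        gst_buffer_unmap (pooled, &map);
        gst_buffer_resize (pooled, 0, filled);
      }
    }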
Co-authored-by: Arun Raghavan <arun@asymptotic.io>
A number of changes for correctness:
1) We expose the actual min and max values we support in the allocation
query.
2) We don't support max_buffers as 0, as unlimited buffers are not an
option.
3) In ParamBuffers, we request the max_buffers from the bufferpool config,
as we cannot dynamically allocate buffers; see the sketch below.
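A hedged sketch of the two sides of this (variable names are illustrative):
    /* Hypothetical sketch: advertise our real limits in the allocation query
     * (max_buffers is always > 0, never unlimited) ... */
    gst_query_add_allocation_pool (query, pool, size, min_buffers, max_buffers);
    /* ... and request exactly what the bufferpool was configured with. */
    const struct spa_pod *param = spa_pod_builder_add_object (&b,
        SPA_TYPE_OBJECT_ParamBuffers, SPA_PARAM_Buffers,
        SPA_PARAM_BUFFERS_buffers, SPA_POD_Int (max_buffers),
        SPA_PARAM_BUFFERS_size,    SPA_POD_Int (size));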
We need to make sure the memory sizes are correctly initialised so the
meta makes sense, and we don't copy the meta from the input buffer as
that doesn't make sense given we have our own meta already.
Setting the default size to 0 and outside of the min/max range now means
that there is no suggestion for the size and it should use the
suggestion of the peer.
Counter-intuitive as it seems, when we are driving the clock, we can't
also provide a clock from PipeWire to the pipeline -- we need the
pipeline to drive the graph.
So we make the mode control whether we provide a clock or not.
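A hedged sketch of how the mode could gate this (the `driving` check is an assumption about the actual code):
    /* Hypothetical sketch: only advertise a PipeWire-backed clock when we are
     * not the one driving the graph. */
    if (driving)
      GST_OBJECT_FLAG_UNSET (sink, GST_ELEMENT_FLAG_PROVIDE_CLOCK);
    else
      GST_OBJECT_FLAG_SET (sink, GST_ELEMENT_FLAG_PROVIDE_CLOCK);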
When using the PW source, one might want to dynamically re-link it to a
different source. Setting possible_caps to NULL prevents the caps
intersect from returning a successful result on format change. Do not
set possible_caps to NULL, as we get that from the peer caps, which should
ideally stay the same for the duration of the pipeline run. That allows
re-linking the PW source any number of times with a pipeline like below.
gst-launch-1.0 pipewiresrc autoconnect=false ! queue ! video/x-raw,format=YUY2 ! videoconvert ! xvimagesink
The above pipeline can be made to switch between a camera source and a
screen capture source like wf-recorder.
Note that this fix only improves the status quo and won't work if the
peer caps change due to a re-negotiation.
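A hedged sketch of the check that keeping possible_caps enables (the caps-building helper is hypothetical):
    /* Hypothetical sketch: on a PipeWire format change, check the new format
     * against the cached peer caps instead of a NULL possible_caps. */
    GstCaps *new_caps = caps_from_format_param (format);   /* hypothetical helper */
    if (src->possible_caps != NULL &&
        gst_caps_can_intersect (new_caps, src->possible_caps)) {
      /* the new source still matches what downstream negotiated */
    }
    gst_caps_unref (new_caps);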
We might end up in a situation where, depending on the pipeline, the
intersect might not give us fixated caps.
A possible example of such a pipeline is below.
gst-launch-1.0 -e pipewiresrc target-object=<path> ! audioconvert !
audio/x-raw,format=S16LE,rate=48000,channels=2 ! lamemp3enc !
filesink location=test.mp3
This results in non-fixated caps like the ones below when intersecting the
caps from the format param with possible_caps, which depends on what we have
downstream in the pipeline.
audio/x-raw, layout=(string)interleaved, format=(string)S16LE, rate=(int)48000, channels=(int)2, channel-mask=(bitmask)0x0000000000000003;
audio/x-raw, layout=(string)interleaved, format=(string)S16LE, rate=(int)48000, channels=(int)2
To fix this, fixate the caps explicitly.
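A minimal sketch of the explicit fixation (the surrounding intersect is paraphrased):
    /* Sketch: intersect as before, then make sure the result is fixated. */
    GstCaps *caps = gst_caps_intersect (format_caps, possible_caps);
    if (!gst_caps_is_fixed (caps))
      caps = gst_caps_fixate (caps);   /* takes ownership, returns fixated caps */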
The device passed to gst_device_provider_device_add() is transfer:floating, so
we need to increase its ref, otherwise the pointer we keep internally will be a
dangling ref.
Also, gst_device_provider_device_remove() doesn't actually release the device,
so we have to do it ourselves.
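A hedged sketch of the resulting ref handling (exact call sites are an assumption):
    /* Hypothetical sketch: take a strong ref on the floating device before
     * handing it to the provider, and drop it ourselves on removal. */
    gst_object_ref_sink (device);
    gst_device_provider_device_add (provider, device);
    /* ... later, when the device goes away ... */
    gst_device_provider_device_remove (provider, device);
    gst_object_unref (device);   /* remove() does not release it */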
Fixes #4616
If negotiation was first attempted with unfixed caps, bufferpool support was
unconditionally disabled. Then, at a second caps negotiation attempt, it
wasn't restored according to the property value.
Solution suggested by Xi Ruoyao.
The dbus user service is required for various features - the summary says:
'dbus (Bluetooth, rt, portal, pw-reserve)'
On session logout the dbus service gets shut down while the PipeWire one
relies on a timeout. If a user logs in again before PW has timed out, the
latter stays alive but doesn't handle re-connecting to the dbus service
of the new session, breaking the camera portal and potentially other
features.
Thus, hard-depend on the dbus service (if enabled at build time) so that we
shut down together with it.
The MIDI events have their large-data offsets relative to the start of
the buffer, and the large data is at the end of the buffer. Because we
copied the data down to right after the events without adjusting the
offsets, calculate a correction offset when unpacking the events.
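A heavily hedged sketch of that correction (all names and the layout are illustrative, not the actual unpacking code):
    /* Hypothetical sketch: the large data was copied from old_data_offset
     * (near the end of the buffer) down to new_data_offset (right after the
     * events), so shift each event's data offset by the distance it moved. */
    uint32_t correction = old_data_offset - new_data_offset;
    for (uint32_t i = 0; i < n_events; i++)
      events[i].offset -= correction;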
SysEx in UMP can span multiple packets. In MIDI1 we can't split them up
into multiple events, so we need to collect the complete SysEx and then
write out the event.
Fixes SysEx writes to ALSA seq by running the event encoder until a
valid packet is completed.
Also fixes split MIDI1 packets in the JACK API when going through the
tunnel or via netjack.
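A simplified sketch of the collect-then-write approach (the buffer handling and the writer are illustrative, not the actual encoder):
    #include <stdint.h>
    #include <string.h>
    #define MAX_SYSEX 4096
    extern void write_midi1_event (const uint8_t *data, size_t len);  /* hypothetical writer */
    static uint8_t sysex[MAX_SYSEX];
    static size_t sysex_len;
    /* Hypothetical sketch: accumulate SysEx bytes across packets and only emit
     * one MIDI1 event once the end byte (0xF7) is seen. */
    static void
    collect_sysex (const uint8_t *data, size_t size)
    {
      if (size == 0 || sysex_len + size > MAX_SYSEX)
        return;                                  /* overflow guard for the sketch */
      memcpy (sysex + sysex_len, data, size);
      sysex_len += size;
      if (sysex[sysex_len - 1] == 0xF7) {        /* complete SysEx message */
        write_midi1_event (sysex, sysex_len);
        sysex_len = 0;
      }
    }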
Add jack.connect-audio and jack.connect-midi to specify an array of port
names to link to instead of the default physical ports.
Also actually fixes linking the MIDI ports correctly.
The `access(2)`-based multi-user mediation mechanism doesn't quite work
for the root user, which may cause it to conflict with a running
foreground user session. Prevent this by not running the user service at
all for the root user, which nobody should be doing anyway.
Commit b160a72018 introduced this change
before, but it was omitted in e1e0a886d5.
This again makes sure we don't call the process callback while disconnecting
the stream.
Fixes #3314
In the case of video, if the buffer to be rendered is from upstream and
not from the pipewirepool, map the memory into video frames and copy the
frames instead of doing a buffer copy.
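A hedged sketch of that frame copy (the video info and buffer names are illustrative):
    /* Hypothetical sketch: copy plane by plane via GstVideoFrame so the
     * destination keeps its own layout, instead of a raw buffer copy. */
    GstVideoFrame src_frame, dst_frame;
    if (gst_video_frame_map (&src_frame, &info, inbuf, GST_MAP_READ)) {
      if (gst_video_frame_map (&dst_frame, &info, poolbuf, GST_MAP_WRITE)) {
        gst_video_frame_copy (&dst_frame, &src_frame);
        gst_video_frame_unmap (&dst_frame);
      }
      gst_video_frame_unmap (&src_frame);
    }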
Avoid splitting buffers in the case of video, because that might break
the frame layout, especially for planar formats, for applications which
use pipewiresink as a camera source to capture video.