The simple formats contain some common mappings for other extensions, such
as mp3.
This makes pw-record test.mp3 actually write an mp3 instead of a wav file.
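For illustration only, this is the kind of extension-to-format lookup being
referred to; the struct, table and names below are hypothetical, not the
actual pw-cat code:

```c
#include <stddef.h>
#include <strings.h>

/* Hypothetical sketch of an extension-to-format table. */
struct format_map {
	const char *extension;
	const char *format;
};

static const struct format_map simple_formats[] = {
	{ "wav",  "wav"  },
	{ "flac", "flac" },
	{ "mp3",  "mp3"  },
	{ "oga",  "ogg"  },
};

static const char *format_from_extension(const char *ext)
{
	for (size_t i = 0; i < sizeof(simple_formats) / sizeof(simple_formats[0]); i++)
		if (strcasecmp(ext, simple_formats[i].extension) == 0)
			return simple_formats[i].format;
	return "wav";	/* fall back to wav as before */
}
```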
Since commit a1f33a99df changed buffer handling to create new GstBuffers
instead of reusing pool buffers, the video crop metadata was silently
lost. The code used gst_buffer_get_video_crop_meta(), which returns NULL
on a fresh buffer, so the crop values from PipeWire were never applied.
Change to gst_buffer_add_video_crop_meta() to actually attach the
metadata to the buffer.
Also remove the now-obsolete call in gst_pipewire_pool_wrap_buffer.
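A minimal sketch of the difference; apply_crop is an illustrative helper,
not the actual function in the patch:

```c
#include <gst/video/gstvideometa.h>
#include <spa/buffer/meta.h>

/* Attach the PipeWire crop region to a freshly created GstBuffer.
 * Before the fix, gst_buffer_get_video_crop_meta() was used instead, which
 * returns NULL on a new buffer, so the values below were never written. */
static void
apply_crop(GstBuffer *buffer, const struct spa_meta_region *crop)
{
	GstVideoCropMeta *meta = gst_buffer_add_video_crop_meta(buffer);
	meta->x = crop->region.position.x;
	meta->y = crop->region.position.y;
	meta->width = crop->region.size.width;
	meta->height = crop->region.size.height;
}
```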
This was discovered when using the XDG Desktop Portal's RemoteDesktop
interface: the full desktop was being delivered instead of just the
selected window, because the crop region metadata was not being
propagated to the GStreamer buffer.
Fixes: a1f33a99df ("gst: dequeue a shared buffer instead of original
pool buffer"), from merge request !1258
CC: @jameshilliard
When the graph has no inputs and the channel count is set to 0, don't
create a capture stream. Likewise, don't create a playback stream when
there are no graph outputs and the output channel count is 0.
You can use this to make a sine source or a null sink.
There is no reason to fail when there is no input or output port.
We can simply run the graph with whatever is there. Even if there are no
inputs or outputs at all, running one instance of the plugins is possible.
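As an illustration, a hand-written sketch (not taken from the commit) of
loading such a graph from C with pw_context_load_module(); the module args
assume the builtin sine plugin, and with no graph inputs no capture stream
is created, so the node acts as a standalone sine source:

```c
#include <pipewire/pipewire.h>

int main(int argc, char *argv[])
{
	pw_init(&argc, &argv);

	struct pw_main_loop *loop = pw_main_loop_new(NULL);
	struct pw_context *context = pw_context_new(
			pw_main_loop_get_loop(loop), NULL, 0);

	/* Hand-written example graph: a single builtin sine node, no inputs. */
	const char *args =
		"{ node.description = \"Sine source\" "
		"  filter.graph = { "
		"    nodes = [ { type = builtin label = sine name = s } ] "
		"  } "
		"  capture.props = { audio.channels = 0 } "
		"  playback.props = { media.class = Audio/Source } }";

	pw_context_load_module(context,
			"libpipewire-module-filter-chain", args, NULL);

	pw_main_loop_run(loop);

	pw_context_destroy(context);
	pw_main_loop_destroy(loop);
	pw_deinit();
	return 0;
}
```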
Add a busy builtin plugin that has no ports and keeps the CPU idle or
busy for the given percentage.
The filter graph has, after parsing, a default number of input and
output ports. This is based on the description or on the input and output
ports of the first/last element. Pass this information in the properties
when we emit the info.
Don't use the number of configured input/output ports as the default
number of channels in filter-chain because this is only determined after
activating the graph. Instead, use the default input/output channels.
The result is that when you load filter-chain without any channel
layout, it will default to the number of inputs/outputs of the graph
instead of 0. This allows the node to be visible in the pulseaudio
API.
Fixes #5084
The video-play-fixate example will downgrade the stream to MemFd one
modifier at a time. Sometimes it's useful to test without downgrading;
to avoid having to depend on actual DRM devices (real or virtual), fake
them by using memfd and mapping them in the sink.
A DMA buffer from a DRM device is typically accessed using an API tied to
a DRM device, e.g. Vulkan or EGL. To create such a context for use with a
PipeWire stream that passes DRM device DMA buffers, applications have so
far usually guessed, or made use of the same context with which the stream
content will be presented. This has mostly been the Wayland EGL/Vulkan
context, and while this has worked most of the time, it is somewhat by
accident; for reliable operation, PipeWire must be aware of which DRM
device a DMA buffer should be accessed with.
To address this, introduce device ID negotiation, allowing sources and
sinks to negotiate which DRM device is supported, and which formats and
modifiers are supported by them.
This will allow applications to stop relying on luck or the windowing
system to figure out how to access the DMA buffers. It also paves the
way for being able to use multiple GPUs for different video streams,
depending on what the sources and sinks support.
When suspend_on_idle is set and we go idle, there is a chance that
there is work in the work queue that depends on formats being set.
In suspend_node, check whether the links have a non-zero busy count
before suspending, and return -EBUSY if they do.
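A simplified sketch of that check; the struct layout and field names below
are assumptions for illustration, not the actual PipeWire internals:

```c
#include <errno.h>
#include <spa/utils/list.h>

/* Illustrative structures; the real ones live in pipewire's private code. */
struct link {
	struct spa_list node_link;	/* entry in the node's link list */
	int busy_count;			/* pending work that needs the format */
};

struct node {
	struct spa_list links;
};

/* Refuse to suspend while any attached link still has busy work queued. */
static int suspend_node(struct node *node)
{
	struct link *link;

	spa_list_for_each(link, &node->links, node_link)
		if (link->busy_count > 0)
			return -EBUSY;

	/* ... clear the format and suspend as before ... */
	return 0;
}
```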
Make the parametric-equalizer module destroy the underlying filter-chain
module on destruction. This makes the EQ nodes get destroyed on unload.
Fixes #5045
A `pw_core` may be shared between multiple streams and device provider
instances; thus, when the given component's reference to the core is
dropped, the event handlers must be unregistered so as to avoid
use-after-free and similar issues.
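A minimal sketch of the pattern; the component struct and callback are
illustrative, while pw_core_add_listener() and spa_hook_remove() are the
usual API for registering and unregistering such handlers:

```c
#include <pipewire/pipewire.h>

/* Keep the spa_hook next to the component that registered it, and remove
 * it when that component drops its reference to the shared pw_core, so no
 * callbacks can fire into freed memory afterwards. */
struct component {
	struct pw_core *core;		/* shared with other components */
	struct spa_hook core_listener;
};

static void on_core_error(void *data, uint32_t id, int seq,
		int res, const char *message)
{
	/* handle the error for this component only */
}

static const struct pw_core_events core_events = {
	PW_VERSION_CORE_EVENTS,
	.error = on_core_error,
};

static void component_attach(struct component *c)
{
	pw_core_add_listener(c->core, &c->core_listener, &core_events, c);
}

static void component_release_core(struct component *c)
{
	/* Unregister before dropping the reference to the shared core. */
	spa_hook_remove(&c->core_listener);
	c->core = NULL;
}
```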
Fixes #5030
Fixes: 2bc3e0ca10 ("gst: deviceprodiver: Use GstPipeWireCore and some cleanups")
Move the latency print code to after where we print the port. That way
we only get the latency when we first print the port.
This avoids -lt printing latencies for ports without a link.
If in the PAUSED state, the node can move from idle to suspended,
resulting in the format being cleared and the state no longer being
negotiated. To avoid returning a not-negotiated error when basesrc calls
the create callback, wait for a new format to be provided and for the
negotiated state to return.
WirePlumber recently added a mechanism to force mono mixdown on audio
outputs, which is a useful feature for accessibility. Let's also expose
that setting via libpulse so that existing audio settings UIs can use it.
PipeWire uses a rate of 256/7680 with the integrated camera of Apple
silicon MacBooks. To calculate pw_time.delay correctly in this case, it
has to be divided by time->rate.num. Without this division, the delay
contribution of the `((latency->min_ns + latency->max_ns) / 2)` term
ends up as 255, which is 8.5 seconds.
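For reference, a short worked example of why a delay of 255 at that rate
amounts to 8.5 seconds (assuming pw_time.delay is expressed in units of
pw_time.rate):

```c
#include <stdio.h>

int main(void)
{
	/* Non-normalized rate reported for the camera and the bad delay value. */
	unsigned int num = 256, denom = 7680;
	long long delay = 255;

	/* delay in seconds = delay * rate.num / rate.denom */
	printf("%.1f s\n", (double)delay * num / denom);	/* prints 8.5 s */
	return 0;
}
```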
pipewiresrc reports the delay as latency in the GStreamer pipeline, which
results in rendering a frame every 8.5 seconds.
I suspect the non-normalized rate of 256/7680 is another bug in
PipeWire. The rate for a UVC webcam is reported as 1/30. Both
Video4Linux2 devices report a discrete frame interval of 0.033 s (30 fps).
Fixes #4957