The video-play-fixate example will downgrade the stream to MemFd one
modifier at a time. Sometimes it's useful to test without downgrading;
to avoid having to depend on actual DRM devices (real or virtual), fake
them by using memfd and mapping the buffers in the sink.
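As a point of reference, this is roughly what the faking amounts to: a minimal sketch assuming only standard Linux APIs (memfd_create, ftruncate, mmap), with the buffer size and the wiring into the sink left as placeholders rather than the example's actual code.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* Create an anonymous memfd-backed "fake" buffer and map it, the way a
 * test sink could do instead of importing a real DRM DMA buffer. */
static int make_fake_buffer(size_t size, void **map_out)
{
	int fd;
	void *map;

	/* memfd_create() gives an fd that can be passed around much like
	 * a DMA-BUF fd, but is plain anonymous memory. */
	fd = memfd_create("fake-dmabuf", MFD_CLOEXEC);
	if (fd < 0)
		return -1;
	if (ftruncate(fd, size) < 0) {
		close(fd);
		return -1;
	}
	/* The sink side simply mmap()s the fd to access the pixels. */
	map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		close(fd);
		return -1;
	}
	*map_out = map;
	return fd;
}

int main(void)
{
	void *map = NULL;
	int fd = make_fake_buffer(4096, &map);
	if (fd < 0)
		return 1;
	memset(map, 0, 4096);          /* pretend to write a frame */
	printf("fake buffer fd=%d mapped at %p\n", fd, map);
	munmap(map, 4096);
	close(fd);
	return 0;
}
```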
A DMA buffer from a DRM device is typically accessed using an API tied
to that DRM device, e.g. Vulkan or EGL. To create such a context for use
with a PipeWire stream that passes DRM device DMA buffers, applications
have so far usually guessed, or reused the same context with which the
stream content will be presented. This has mostly been the Wayland
EGL/Vulkan context, and while it has worked most of the time, it's
somewhat by accident; for reliable operation, PipeWire must be aware
of which DRM device a DMA buffer should be accessed with.
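To make the device dependency concrete, here is a hedged sketch of importing a DMA-BUF with EGL's EGL_EXT_image_dma_buf_import extension; the display, fd, stride, and format values are placeholders, and the point is that the EGLDisplay must be backed by the right DRM device for the import to work reliably.

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <drm_fourcc.h>

/* Import a DMA-BUF fd as an EGLImage. Whether this works reliably
 * depends on the EGLDisplay being backed by the same DRM device that
 * produced the buffer, which is exactly what applications so far could
 * only guess at. */
static EGLImageKHR import_dmabuf(EGLDisplay dpy, int fd,
				 int width, int height, int stride)
{
	PFNEGLCREATEIMAGEKHRPROC create_image =
		(PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
	const EGLint attribs[] = {
		EGL_WIDTH, width,
		EGL_HEIGHT, height,
		EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_XRGB8888,
		EGL_DMA_BUF_PLANE0_FD_EXT, fd,
		EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
		EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
		EGL_NONE
	};
	if (!create_image)
		return EGL_NO_IMAGE_KHR;
	return create_image(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
			    NULL, attribs);
}
```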
To address this, introduce device ID negotiation, allowing sources and
sinks to negotiate which DRM device, and which formats and modifiers,
they support.
This will allow applications to stop relying on luck or the windowing
system to figure out how to access the DMA buffers. It also paves the
way for being able to use multiple GPUs for different video streams,
depending on what the sources and sinks support.
When suspend_on_idle is set and we go idle, there is a chance that there
is still work in the work queue that depends on the formats being set.
In suspend_node, check whether the links have a non-zero busy count
before suspending, and return -EBUSY if they do.
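A hedged sketch of the check; the node/link structures and the busy_count field below are hypothetical stand-ins for the real pw_impl_* internals, shown only to illustrate the early -EBUSY return.

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the real pw_impl_node/pw_impl_link types. */
struct fake_link {
	struct fake_link *next;
	int busy_count;          /* pending work that needs the format */
};

struct fake_node {
	struct fake_link *links;
	bool suspend_on_idle;
};

/* Before suspending (which clears the negotiated format), refuse when a
 * link still has queued work depending on that format. */
static int suspend_node(struct fake_node *node)
{
	struct fake_link *l;

	for (l = node->links; l != NULL; l = l->next) {
		if (l->busy_count > 0)
			return -EBUSY;   /* retry once the work completes */
	}
	/* ... actual suspend: clear formats, release buffers, etc. ... */
	return 0;
}
```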
Make the parametric-equalizer module destroy the underlying filter-chain
module on destruction. This makes the EQ nodes get destroyed on unload.
Fixes #5045
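For illustration, a hedged sketch of such a destroy hook, assuming the public pw_impl_module API (pw_impl_module_destroy, pw_impl_module_events); the impl struct and field names are illustrative, not the module's actual code.

```c
#include <stdlib.h>
#include <pipewire/impl.h>

/* Illustrative impl struct: the parametric-equalizer module keeps a
 * handle on the filter-chain module it loaded. */
struct impl {
	struct pw_impl_module *module;       /* ourselves */
	struct pw_impl_module *eq_module;    /* the loaded filter-chain */
	struct spa_hook module_listener;
};

static void module_destroy(void *data)
{
	struct impl *impl = data;

	spa_hook_remove(&impl->module_listener);
	/* Tear down the underlying filter-chain module so its EQ nodes
	 * disappear when module-parametric-equalizer is unloaded. */
	if (impl->eq_module != NULL) {
		pw_impl_module_destroy(impl->eq_module);
		impl->eq_module = NULL;
	}
	free(impl);
}

static const struct pw_impl_module_events module_events = {
	PW_VERSION_IMPL_MODULE_EVENTS,
	.destroy = module_destroy,
};
```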
A `pw_core` may be shared between multiple streams and device provider
instances; thus, when a given component's reference to the core is
dropped, its event handlers must be unregistered so as to avoid
use-after-free and similar issues.
Fixes #5030
Fixes: 2bc3e0ca10 ("gst: deviceprodiver: Use GstPipeWireCore and some cleanups")
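A hedged sketch of the pattern, using the public pw_core_add_listener/spa_hook_remove API; the wrapper struct here is a simplified illustration, not the actual GstPipeWireCore code.

```c
#include <pipewire/pipewire.h>

/* Simplified, illustrative wrapper around a shared pw_core. */
struct shared_core {
	struct pw_core *core;      /* shared between streams/providers */
	struct spa_hook core_listener;
};

static void on_core_error(void *data, uint32_t id, int seq,
			  int res, const char *message)
{
	/* handle errors for this particular user of the core */
}

static const struct pw_core_events core_events = {
	PW_VERSION_CORE_EVENTS,
	.error = on_core_error,
};

static void user_attach(struct shared_core *sc)
{
	pw_core_add_listener(sc->core, &sc->core_listener, &core_events, sc);
}

static void user_release(struct shared_core *sc)
{
	/* The core itself may outlive this user, so the hook must be
	 * removed here; otherwise the core keeps calling into freed
	 * memory (use-after-free). */
	spa_hook_remove(&sc->core_listener);
}
```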
Move the latency printing code to after the point where we print the
port, so that the latency is only shown when the port itself is first
printed. This avoids -lt printing latencies for ports without a link.
If in the PAUSED state, the node can move from idle to suspended, which
clears the format so that the state is no longer negotiated. To avoid
returning a not-negotiated error when basesrc calls the create callback,
wait until a new format is provided and the negotiated state is
restored.
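A hedged sketch of the waiting logic, assuming the pw_thread_loop lock/wait/signal API; the state struct and the negotiated/flushing flags are illustrative placeholders, not the actual GstPipeWireSrc fields.

```c
#include <stdbool.h>
#include <pipewire/pipewire.h>

/* Illustrative placeholder for the relevant pipewiresrc state. */
struct src_state {
	struct pw_thread_loop *loop;
	bool negotiated;     /* set by the param-changed/format callback */
	bool flushing;
};

/* Called from the create callback: instead of failing with
 * not-negotiated when the node was suspended and the format cleared,
 * block until a new format has been negotiated again. */
static int wait_for_format(struct src_state *s)
{
	pw_thread_loop_lock(s->loop);
	while (!s->negotiated) {
		if (s->flushing) {
			pw_thread_loop_unlock(s->loop);
			return -1;   /* e.g. map to GST_FLOW_FLUSHING */
		}
		/* Woken up by pw_thread_loop_signal() from the callback
		 * that installs the new format. */
		pw_thread_loop_wait(s->loop);
	}
	pw_thread_loop_unlock(s->loop);
	return 0;
}
```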
WirePlumber recently added a mechanism to force mono mixdown on audio
outputs, which is a useful accessibility feature. Let's also expose that
setting via libpulse so that existing audio settings UIs can use it.
PipeWire uses a rate of 256/7680 with the integrated camera of Apple
silicon MacBooks. To calculate pw_time.delay correctly in this case, the
value has to be divided by time->rate.num. Without this division the
delay contribution of the `((latency->min_ns + latency->max_ns) / 2)`
term ends up as 255, which corresponds to 8.5 seconds.
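A small worked example of the arithmetic, assuming the usual meaning of pw_time.rate (a delay of N ticks corresponds to N * num / denom seconds); the numbers are the ones from this report, and the exact expression in the stream code may differ.

```c
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

int main(void)
{
	/* Numbers from this report: the Apple silicon camera rate and a
	 * latency midpoint ((min_ns + max_ns) / 2) of one 30 fps frame. */
	int64_t latency_ns = 33333333;
	int64_t rate_num = 256, rate_denom = 7680;

	/* Without dividing by rate.num the term contributes ~255 ticks. */
	int64_t ticks_wrong = latency_ns * rate_denom / NSEC_PER_SEC;
	/* A delay of N ticks corresponds to N * num / denom seconds. */
	double secs_wrong = (double)ticks_wrong * rate_num / rate_denom;

	/* With the division the contribution is ~1 tick, i.e. one frame. */
	double ticks_right = (double)ticks_wrong / rate_num;
	double secs_right = ticks_right * rate_num / rate_denom;

	printf("without /rate.num: %lld ticks = %.2f s\n",
	       (long long)ticks_wrong, secs_wrong);   /* 255 ticks, 8.5 s */
	printf("with    /rate.num: %.2f ticks = %.3f s\n",
	       ticks_right, secs_right);              /* ~1 tick, 0.033 s */
	return 0;
}
```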
pipewiresrc reports the delay as latency in the GStreamer pipeline,
which results in rendering a frame every 8.5 seconds.
I suspect the non-normalized rate of 256/7680 is another bug in
PipeWire. The rate for a UVC webcam is reported as 1/30, and both
Video4Linux2 devices report a discrete frame interval of 0.033 s
(30 fps).
Fixes #4957
In the current state the GET/SET stream format handling can produce the
command responses; however, it does not yet take care of checking that
(a sketch of such checks follows the list below):
* A bound input stream cannot have its format set, and should reply
  accordingly.
* A STREAMING_STREAM output stream cannot have its format set, and
  should reply accordingly.
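A hedged sketch of the missing checks; the types, state flags, and status code below are hypothetical stand-ins for the module's actual AVB/AVDECC definitions, and only illustrate replying with an error instead of applying the format.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the module's stream state and the status
 * codes used in the reply. */
enum stream_dir { STREAM_INPUT, STREAM_OUTPUT };

struct stream {
	enum stream_dir direction;
	bool bound;        /* input stream bound to a talker */
	bool streaming;    /* output stream in STREAMING_STREAM state */
	uint64_t format;
};

enum aem_status {
	AEM_STATUS_SUCCESS = 0,
	AEM_STATUS_STREAM_IS_RUNNING,   /* placeholder status code */
};

/* SET stream format handling: reject the change instead of silently
 * applying it when the stream is in use, and reply with the status. */
static enum aem_status set_stream_format(struct stream *s, uint64_t fmt)
{
	if (s->direction == STREAM_INPUT && s->bound)
		return AEM_STATUS_STREAM_IS_RUNNING;
	if (s->direction == STREAM_OUTPUT && s->streaming)
		return AEM_STATUS_STREAM_IS_RUNNING;

	s->format = fmt;
	return AEM_STATUS_SUCCESS;
}
```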